Building Blocks of Memory in the Brain

  • Published Nov 24, 2024

COMMENTS • 560

  • @ArtemKirsanov
    @ArtemKirsanov  Рік тому +39

    To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/.
    The first 200 of you will get 20% off Brilliant’s annual premium subscription.

    • @verlax8956
      @verlax8956 Рік тому

      I cheated the system by pretending to be a school administrator and got Brilliant completely free, thanks for the offer though

    • @andyvill8131
      @andyvill8131 Рік тому

      @@verlax8956 You're a cheater

    • @suheilpinto6964
      @suheilpinto6964 11 місяців тому

      Hyperthymesia syndrome: how does that happen?

    • @mimimo6901
      @mimimo6901 11 місяців тому

      I just want to ask: can neuroscientists now erase traumatic and fear memories? When are they going to start clinical trials on humans? Please, if you have any idea, answer me 🙏 thank you

    • @DhanushkaJayasinghe-ib1cd
      @DhanushkaJayasinghe-ib1cd Місяць тому

      vorinostat for fear reduction!

  • @guilhermesantos7355
    @guilhermesantos7355 Рік тому +764

    As a Technology and Neuroscience undergraduate, I can say your videos are not only scientific work but also one hell of an art piece! Thanks man, greetings from Brazil

    • @ArtemKirsanov
      @ArtemKirsanov  Рік тому +31

      Thank you!

    • @youcer
      @youcer Рік тому +7

      agree

    • @AlintraxAika
      @AlintraxAika Рік тому +2

      Which university are you studying neuroscience at?

    • @BrunoSantos-bg8xz
      @BrunoSantos-bg8xz Рік тому +3

      Look who I just found here

    • @guilhermesantos7355
      @guilhermesantos7355 Рік тому +8

      @@BrunoSantos-bg8xz AAAAAAAAAAAAAH NO WAY HAHAHAHAHA I think every neuro student at UFABC watches Artem

  • @hackerbrinelam5381
    @hackerbrinelam5381 Рік тому +98

    This is very fascinating. Now I know how my brain literally, physically learns things, and it answers some questions I had about common learning advice: "why should you learn using most of your senses", "why do you need to focus and pay attention", "why repetition", "why you should use your prior experience to help you learn", "why do you forget sometimes and remember other times, or why you can't retrieve a memory anytime you want".

    • @subdynoman
      @subdynoman Рік тому +11

      The brain is sensitive, especially to chemical changes... diet and health have the most influence on the physical makeup of the body and brain.

    • @egor.okhterov
      @egor.okhterov Рік тому +4

      Don't forget to try teaching someone after you learned something new

  • @joonaskuusisto2767
    @joonaskuusisto2767 Рік тому +141

    I’m studying neuroscience in the context of phase transitions. I sometimes intellectually veer towards AI and general computer science but the brilliancy of your videos rekindles the fire for neuroscience. If only more people with your communication and multimedia skills were involved in neurosci, we’d be marching on towards something marvelous. Public exposure and interest control the funding both in academia and industry, this kind of content has the power to ignite mass movements of brilliant minds.

    • @tedarcher9120
      @tedarcher9120 Рік тому

      Where are you studying? I'm a physicist that wants to move to neuroscience

    • @olavp.4019
      @olavp.4019 Рік тому

      Do we really want that?
      I think not!
      Keep these words in mind for the rest of your life.
      Ciao

    • @MilanMilan0000
      @MilanMilan0000 Рік тому +1

      As someone studying quantum physics, specifically phase transitions as well, it's interesting to learn what a phase transition means in other fields.

    • @andyvill8131
      @andyvill8131 Рік тому

      exactly

    • @sauravistheascended7161
      @sauravistheascended7161 Рік тому +1

      Do you really believe there is a need and void to fill for this particular type of content? Genuinely curious to know if you really think this and why.

  • @iandanforth
    @iandanforth Рік тому +42

    I had no idea that neuron excitability varied with a period of hours! Such an important piece of the puzzle, thanks for this video.

  • @Anatanomerodi
    @Anatanomerodi Рік тому +71

    I recently discovered your videos, and being a neuroscience PhD student myself, I want to thank you. Your work has re-sparked my motivation to read about topics outside my PhD subject, something I had been meaning to do for a long time but never found the energy for in the day-to-day of work. The presentation of the topics is excellent, as is the editing of the videos. Thank you very much for these incredible contributions.

    • @john.8805
      @john.8805 Рік тому +1

      May I ask what you do for work as a Neuroscience PhD? Is it medicine? I've always wondered.

    • @Anatanomerodi
      @Anatanomerodi Рік тому +3

      @@john.8805 Hello! Sorry, I didn't see your comment. I work on brain-computer interfaces, which are applications that decode brain signals and use them to send commands to a computer or to estimate cognitive processes and inform other applications about the user's mental state

    • @eismccc
      @eismccc 2 місяці тому

      @@Anatanomerodi I am in AI and am very much on my way to incorporating brain-computer interfaces to create bio-feedback loops. It'd be cool to bounce ideas around; do you have an email or something, if you feel like chatting?

    • @Anatanomerodi
      @Anatanomerodi 2 місяці тому

      @@eismccc That would be cool! I don't know how to DM here on youtube and I'd rather not post my email in the comments section tho

  • @allanburns1190
    @allanburns1190 Рік тому +6

    This is one of the best video essays I’ve ever watched on YouTube

  • @VolodymyrRushchak-k6l
    @VolodymyrRushchak-k6l Рік тому +70

    Man, this channel is a treasure for someone interested in biology and neuroscience. Thanks a lot for your efforts! ❤❤❤

  • @stefanvidenovic5095
    @stefanvidenovic5095 Місяць тому +1

    It actually makes perfect sense that memory will not be stored in just one part of the brain because memory recall is a recreation of an entire 6-sense experience (though in a somewhat faded and less vivid form in most of the cases). An experience is not limited to any one region of the brain, it activates many regions of the brain at the same time.

  • @chenmarkson7413
    @chenmarkson7413 Рік тому +14

    Second-year uni student here (neuroscience major); I feel like I am watching a spoiler and can't stop myself. This is so interesting, learning about all the progress we have on the neuronal basis of learning and memory. Much much much more interesting than the various theoretical memory models I have to memorize in psychology classes!

  • @tinkeringtim7999
    @tinkeringtim7999 Рік тому +32

    I would love to see you take a deep dive into cognitive/behavioral relationships to engram learning. A lot of people struggling with trauma-related memory issues (incl. PTSD) would likely benefit from understanding how their brains physically learned (and could un-learn). In fact, it seems to me many therapists could also do with knowing more about learning and plasticity.

    • @vachansj
      @vachansj Рік тому +4

      Check out Johannes Graff's research. He talks about the critical window during which a memory can be re-updated to decrease aversion or fear, improving therapy for PTSD. And yes, therapists do know about those concepts, but research into how they can be implemented safely needs more data. For example: if the re-updating of the memory is not done carefully, it might lead to an increase in fear rather than a decrease (because you are recalling the fearful memory without re-updating it to a positive one).

    • @bermagot9238
      @bermagot9238 3 місяці тому

      This is actually the basis of Scientology.

    • @tinkeringtim7999
      @tinkeringtim7999 3 місяці тому +2

      @@bermagot9238 I think that's something you have projected into scientology, rather than it being inherently in the fabric of that framework.

  • @steelex44
    @steelex44 Рік тому +2

    I'm in undergrad, exploring intersections of neuroscience + engineering + psychology, and your channel was/is my first exposure to computational neuroscience. Very cool stuff. Thank you for your videos; they're so well made!

  • @VaradMahashabde
    @VaradMahashabde Рік тому +20

    I am always surprised by how beginner friendly your videos are.

  • @GeoffryGifari
    @GeoffryGifari Рік тому +5

    wow... so many questions about this one...
    1. Is memory encoded in the structure of neuron interconnections, or in the pattern of action potentials buzzing through the web of neurons? Given a network pattern of dendrites, axons, and synapses, is the memory "still there" even when no signals are being passed?
    2. How does repetition strengthen memory, in terms of the physical connections between neurons?
    3. On gene activation when a memory forms: what is the timescale of this process? Remembering can be pretty fast... can genes be expressed (and make lasting changes) just as fast?
    4. How far can we "isolate parts of a memory"? With mouse fear conditioning, how can we be sure that the pain of the shock is linked to the sound stimulus only, instead of the sound stimulus + a given position in the lab + the objects, shapes, and colors around the mouse at that time + the ambient smell + ... other things that might also be encoded in the engram?
    5. If two different mice went through fear conditioning with the exact same setup, would we see a difference between the engrams of each mouse?
    6. Let's say we subject a mouse to fear conditioning and observe the engram. We then wait until the mouse forgets that experience (weeks? months?). If we do fear conditioning again on the same mouse, would the same engram be formed?
    7. Can the idea of engrams be used to estimate the memory capacity of a brain? We know it can't be infinite, because the brain is a physical substrate.
    8. Can we induce the growth of new linking neurons between two engrams chemically/biologically? So instead of the mouse retrieving two memories simultaneously and getting them linked, we "link" two memories artificially even though they had nothing to do with each other before.
    9. We know that the brain is not the only component of the central nervous system. Are memories (related to reflexes) encoded in the spinal cord in the same way as they are in the brain?

    • @cosmictreason2242
      @cosmictreason2242 Рік тому +3

      7 is not guaranteed if mind-body dualism is true. In that case the combination of neuron activations acts as an indexing/lookup function. The number of possible combinations across millions of neurons is fundamentally on the order of 10^1,000,000, and we have billions if not trillions of neurons. Even 10^80 would be one memory per atom in the universe.
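
To put a rough number on the combinatorial point above, here is a back-of-the-envelope count of how many distinct sparse engrams a population could form. The neuron count and engram size below are invented for illustration; they are not figures from the video:

```python
import math

def log10_combinations(n, k):
    """log10 of C(n, k), via log-gamma so the huge integer is never built."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)) / math.log(10)

# Hypothetical numbers: a region with ten million neurons,
# and a sparse engram recruiting about 5% of them.
n_neurons, engram_size = 10_000_000, 500_000
print(f"distinct possible engrams ~ 10^{log10_combinations(n_neurons, engram_size):,.0f}")
```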

  • @davidyang102
    @davidyang102 Рік тому +19

    The temporary excitability reminds me of dropout, a technique that improves deep learning by turning off neurons randomly; that makes the network more robust. (There's a small code sketch of dropout at the end of this thread.)

    • @ShpanMan
      @ShpanMan Рік тому +4

      Current deep learning is a pale and weak version of biological neurons. We will look back and be amused that we thought this could actually be the right architecture when we have brains all around us and we took almost no inspiration or principle from them.

    • @Smonjirez
      @Smonjirez Рік тому +3

      @@ShpanMan The power of current deep learning certainly does not lie in its architecture but in its scaling ability and ease of use. I doubt more architecturally accurate versions would currently be really useful as they would probably require orders of magnitude more computational resources using currently available technology/hardware.

    • @ShpanMan
      @ShpanMan Рік тому

      @@Smonjirez What are you talking about? Your brain runs on a McDonalds happy meal. You think current Neural networks are more efficient? 🤣

    • @Smonjirez
      @Smonjirez Рік тому +1

      @@ShpanMan Ehm no? I think their current design is more efficient to run on computers.

    • @mattaku9430
      @mattaku9430 Рік тому

      @@ShpanMan Yes, in specialised tasks artificial neurons are way more efficient.
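
For readers curious what the dropout mentioned earlier in this thread looks like in code, here is a minimal sketch of standard "inverted" dropout in plain NumPy. It is a regularisation trick, not a model of biological excitability, and the drop probability below is arbitrary:

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=None):
    """Inverted dropout: randomly zero units during training and rescale the survivors."""
    if not training or p_drop == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    keep_mask = rng.random(activations.shape) >= p_drop   # keep each unit with prob 1 - p_drop
    return activations * keep_mask / (1.0 - p_drop)       # rescaling keeps the expected activation unchanged

h = np.array([0.2, 1.5, 0.0, 0.8, 2.1])
print(dropout(h, p_drop=0.4, rng=np.random.default_rng(0)))
```

At test time the mask is skipped entirely (`training=False`), which is what makes the 1/(1 - p_drop) rescaling during training necessary.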

  • @TripImmigration
    @TripImmigration Рік тому +3

    As a scientist and entrepreneur in the education field, I can only say thank you for this amazing video. Now I have more papers to dive into.
    Subscribed.

  • @andrewhooper7603
    @andrewhooper7603 Рік тому +7

    Thanks for the new engram.

  • @cirecrux
    @cirecrux Рік тому +82

    Massive respect for the brain guys who do the brain work

    • @v2ike6udik
      @v2ike6udik Рік тому

      Why? To lock you in hell here? Look what they did. Most people I knew are now empty vessels. One frikking shot and the soul is gone.

    • @v2ike6udik
      @v2ike6udik Рік тому

      watch?v=Z4-VyHOQT-k
      Cry your heart out once you understand what they did.

    • @v2ike6udik
      @v2ike6udik Рік тому

      Do you understand? People are masturbating to become robots, and most already have.

    • @Andrea-fd2bw
      @Andrea-fd2bw Рік тому +1

      @@v2ike6udik the soul can't be gone, the soul is eternal

    • @v2ike6udik
      @v2ike6udik Рік тому

      @@Andrea-fd2bw A soul disconnected from spirit basically becomes a demon. The soul is "gone".

  • @EMOTIBOTS
    @EMOTIBOTS Рік тому +6

    Hi, really interesting to learn about the waxing and waning of neuron excitability. It makes sense why some things are easier to process depending on the time of day.
    There's one more thing you could add to the reasons why only some neurons are selected for an engram: when one neuron fires, it changes the electrical potential of the area just outside its membrane, which in turn locally raises the threshold other neurons need to reach in order to fire. If there are two neurons equal in excitability and one of them happens to fire first, the second one may not fire because of the heightened threshold now required. Love watching your videos, very inspiring and well communicated!
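
One loose way to picture the "most excitable cells fire first and suppress the rest" idea from the comment above is a toy winner-take-all allocation. All numbers here are invented and this is not a biophysical model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 1000
excitability = rng.normal(0.0, 1.0, n_neurons)   # slowly fluctuating baseline excitability
drive = rng.normal(0.0, 0.2, n_neurons)          # input from the current experience
threshold = 1.0                                  # firing threshold
inhibition = 0.0                                 # shared lateral inhibition
engram = []

for cell in np.argsort(excitability + drive)[::-1]:        # most excitable cells fire first
    if excitability[cell] + drive[cell] - inhibition < threshold:
        break                                              # no remaining cell can reach threshold
    engram.append(int(cell))
    inhibition += 0.05                                     # each recruited cell suppresses the rest

print(f"{len(engram)} of {n_neurons} neurons recruited ({100 * len(engram) / n_neurons:.1f}%)")
```

Because each recruited cell raises the inhibition on everyone else, only a small, highly excitable subset ends up in the engram even though many cells received input.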

  • @GeoffryGifari
    @GeoffryGifari Рік тому +6

    nice job man... one of my top youtube sources for up-to-date neuroscience without dumbing down

  • @cheapshotfishing9239
    @cheapshotfishing9239 Рік тому +4

    New Artem Kirsanov vid just dropped, shit's gonna be a banger

  • @deschia_
    @deschia_ Рік тому +24

    It's absolutely mind blowing to realize that our brain is basically a highly evolved computer and storage system, and that ultimately computers are starting to evolve like a biological brain

    • @michaelt1775
      @michaelt1775 Рік тому

      😂

    • @narcissesmith9466
      @narcissesmith9466 Рік тому

      It's almost like computers operate like our thinking tendencies...

    • @Anonymous-fr2op
      @Anonymous-fr2op 7 місяців тому +2

      As an NN engineer, I could sense the similarities and realized just how much we copy the functionality of the brain without even knowing it 😂😂 These are some of the tricks we use to train our models to catch patterns in seemingly unrelated piles of data.

  • @ronaldronald8819
    @ronaldronald8819 Рік тому +3

    This is so interesting. Cheers to you brilliant researchers that figured this stuff out. Thanks for sharing.

  • @GUINTHERKOVALSKI
    @GUINTHERKOVALSKI Рік тому +6

    I would like to see you talk about one topic: biological neurons are capable of performing XOR operations. Not only is a single neuron capable of it, even individual dendrites are, while an artificial neuron is not. Take a look at the paper:
    “Dendritic action potentials and computation in human layer 2/3 cortical neurons”

    • @ArtemKirsanov
      @ArtemKirsanov  Рік тому +6

      Hi! I actually already have a video on this very topic :)
      ua-cam.com/video/hmtQPrH-gC4/v-deo.html

    • @mimimo6901
      @mimimo6901 11 місяців тому

      @@ArtemKirsanov so when will neuroscientists be able to erase our fear and painful memories?

  • @icandreamstream
    @icandreamstream Рік тому +6

    What an achievement this video is, thanks for taking the time to create this.

  • @NeuroDescomplicada
    @NeuroDescomplicada Рік тому +32

    You're one of my favorite educational/scientific youtubers!! Your work inspires me to make better videos in my own language, as well as to understand my field more comprehensively as a PhD student here in Brazil! Do you create your own animations, or do you have a team that does it?

    • @ArtemKirsanov
      @ArtemKirsanov  Рік тому +11

      Wow, thank you so much! I do everything myself :)

    • @_kantor_
      @_kantor_ Рік тому +1

      What kind of editing program do you use?

  • @mdtanvirahmedsagor6146
    @mdtanvirahmedsagor6146 Рік тому +5

    Literally this channel is a treasure and this video is just a masterpiece ❤

  • @israels9842
    @israels9842 Рік тому +4

    Never end this series please!

  • @marcoramonet1123
    @marcoramonet1123 8 місяців тому +1

    I absolutely adore this. I have asked myself this very question. And the way this is answered is done beautifully. Thank you so much sir!

  • @subendhusarkar2870
    @subendhusarkar2870 Рік тому +7

    I was really missing your videos. Thanks for uploading

    • @ArtemKirsanov
      @ArtemKirsanov  Рік тому +2

      Thank you! Yeah, sorry about that. I was quite busy with finishing my degree and moving countries

  • @emm5468
    @emm5468 Рік тому +9

    If the brain codes parts of a memory in different areas, this might explain why some sounds and smells can bring you back to something like a childhood memory. If different areas are responsible for different portions of a memory, then a small triggering of one of those stimuli might cause a cascade of associated brain regions to activate in response.

  • @MarkosDrakos
    @MarkosDrakos Рік тому +23

    Such an amazing video on such an interesting field, thank you for this! I've recently studied a module on engrams and one paper I found really interesting - claiming to have satisfied the engram mimicry criterion - was Vetere et al. (2019) - "Memory formation in the absence of experience". I found this to be the most groundbreaking stuff so far, and the only evidence so far to suggest that mimicry may be possible. I'd love to know your thoughts!
    I'd also love to see a video on the clinically translatable parts of engrams - and the utilisation of the tag and manipulate/erase tools as treatments for OCD and addiction. I also thought this area had some really cool research, and seeing it in video format with your animations and explanations would be really useful!

    • @ArtemKirsanov
      @ArtemKirsanov  Рік тому +5

      Thank you! I’m happy to know you enjoyed it :)
      Hmm, I haven’t encountered this particular paper. Thanks for pointing it out! I’ll take a look

  • @En1Gm4A
    @En1Gm4A Рік тому +5

    Thanks. Not sure what AI designers might do with this information. I think adding the dimension of time and power-law activation patterns might boost the capabilities of neural nets.

  • @cheapshotfishing9239
    @cheapshotfishing9239 Рік тому +15

    Artem, your videos are the biggest help to me in my quest to create a digital consciousness.

    • @physiologic187
      @physiologic187 Рік тому +3

      I'm curious on how you plan to implement it? Are you trying to engineer some kind of neural network which is structured & functionally organized similarly to the brain?

    • @bitterlemonboy
      @bitterlemonboy Рік тому +10

      lol good luck

    • @nenadnen11111
      @nenadnen11111 Рік тому +6

      @@bitterlemonboy indeed lol

    • @cheapshotfishing9239
      @cheapshotfishing9239 Рік тому +2

      Idk lol I just think if we can create something really really close to how our brain works within a computer, we can understand how we work on a deeper level.
      Thankfully I've got until I die to figure it out.

    • @diadetediotedio6918
      @diadetediotedio6918 Рік тому +2

      You will not succeed with that in digital computers.

  • @jobbimaster
    @jobbimaster Рік тому

    This whole video brings to mind the nature of trauma, how it is ingrained, and ultimately how it can be untangled.

  • @nigtendos
    @nigtendos 9 місяців тому

    As a biotech researcher, at work I have to design experiments with this kind of train of thought, and I see it as part of the routine. This video totally reawakens the passion and awe that led me to follow this career. Thank you for posting!!

  • @ShpanMan
    @ShpanMan Рік тому +2

    This is a ton of help for me. I am trying to figure out what we know about how the brain works and come up with as many principles as possible that can be converted into artificial neural networks. It's incredible how this graph of nodes and edges can do so much.

  • @ksalarang
    @ksalarang Рік тому +1

    Finally a video on this channel that I could follow the entire time

  • @jonnyschindler3684
    @jonnyschindler3684 Рік тому +1

    Probably the best neuroscience youtuber

  • @reeb3687
    @reeb3687 Рік тому +6

    Do we currently know how brains "check for overlap" between separate engrams? Also, is it possible for completely unrelated memory clusters to randomly have similar engrams/engram positions, causing them to be intrinsically linked, and, if so, how often/how likely is this to occur?

  • @TheLazyBot
    @TheLazyBot Рік тому

    I am baffled by how simple you’re making this sound. I’ve always been curious how brains work, and binging your videos has totally made it make sense.

  • @alexharvey9721
    @alexharvey9721 Рік тому +6

    So good! Honestly my favourite channel on YouTube and the only one I check regularly to see if I've missed any videos. It just keeps getting better!
    Optogenetics really is a field living up to the hype. Incredible tech.
    It would also be interesting to see whether manually setting the engram comes with some cost.

  • @MegaNightdude
    @MegaNightdude Рік тому +3

    Artem, great job. Your presentation is off the charts. I've been doing modeling research on engrams for a couple of years now, but your video was still super informative for me. Thanks!

  • @GabrielCarvv
    @GabrielCarvv Рік тому +4

    Absolutely fabulous video, as always. Maximally interesting content with maximally intuitive animations. Unmatched!

  • @nicholas_obert
    @nicholas_obert Рік тому

    This video is gold. Clean animations and calm voice. It deserves many more views

  • @forthehomies7043
    @forthehomies7043 Рік тому +1

    I can’t wrap my head around memory. Wild stuff.

  • @JandCanO
    @JandCanO Рік тому +1

    We know so much yet so little about the brain. This is a very exciting topic to follow, thanks for the video!

  • @hugocome123
    @hugocome123 Рік тому +3

    Thank you for this video. I don't usually write comments, but I have to say that you really did an incredible job of pedagogy here. Usually I need to watch your videos several times to understand everything, but this one was so clear that once was enough.

  • @timothytyree5211
    @timothytyree5211 Рік тому +4

    Great video! Thanks for making it, Artem!

  • @ryiv1848
    @ryiv1848 Рік тому

    This (the linking memory part) is the best explanation I've heard about the brain's principle of contiguity

  • @anyalind4722
    @anyalind4722 Рік тому +12

    Okay, folks, here's the first comment. I've done it!
    (Edit):
    Most of the information in the video is familiar to me, but the visualization works great, updating and complementing my knowledge. It's a real piece of art in the popularization genre. Or even like a Disney film for scientists ;)

  • @Bit-while_going
    @Bit-while_going Рік тому +3

    Amygdala: emotion
    Hippocampus: measurement
    Cortex: sensation
    But I first need to reminisce to appreciate each one, and I still have the thalamus and hypothalamus left over. Which one do I choose?

    • @cosmictreason2242
      @cosmictreason2242 Рік тому

      Hypothalamus is your hormone control center that governs your endocrine system

  • @haronsantos2456
    @haronsantos2456 3 місяці тому +1

    Thank you, it was a piece of art

  • @smilefaxxe2557
    @smilefaxxe2557 Рік тому +10

    The Brain is such an amazingly interesting organ 🧠❤
    And you do a great job at explaining concepts regarding the brain, thank you! 🔥👍

    • @nateshrager512
      @nateshrager512 Рік тому +1

      Yea, it's the most amazing. But then again, look who is telling you that. Might be some bias 😂

    • @smilefaxxe2557
      @smilefaxxe2557 Рік тому

      @@nateshrager512 Well, it might be 😅
      But it's always cool to listen to someone who is passionate about their topic 👍

  • @priyanshugoel3030
    @priyanshugoel3030 Рік тому

    What I took from this was: the brain stores information in multiple sparsely populated graph-like structures, which on co-allocation or co-retrieval are connected by adding some nodes.
    Also, the neurons of an experience are spread well apart in the brain, maybe so that during an eventual co-retrieval some neurons can be left over to facilitate connections. And since sparse graphs and planar graphs are easier to traverse, maybe some processes also handle a form of garbage collection aimed at those neurons.
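
A very loose way to picture the comment above: engrams as sparse sets of neuron IDs whose overlap grows when they are linked. The sizes and counts are invented and this is only a toy, not the mechanism described in the video:

```python
import random

random.seed(0)
population = range(10_000)                        # neuron IDs in one region
engram_a = set(random.sample(population, 300))    # two sparse ensembles (~3% each)
engram_b = set(random.sample(population, 300))
print("overlap before linking:", len(engram_a & engram_b))

# Model co-retrieval crudely: both engrams recruit extra cells from the same
# pool of currently excitable neurons, which increases their overlap.
excitable_pool = random.sample(population, 500)
linking_cells = set(random.sample(excitable_pool, 50))
engram_a |= linking_cells
engram_b |= linking_cells
print("overlap after linking: ", len(engram_a & engram_b))
```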

  • @weylin6
    @weylin6 Рік тому +2

    I wonder what causes issues like difficulty forming or recalling memories, or why some things are more easily learned. If you find something interesting, it seems to make you more likely to remember it.

  • @mpanganiban
    @mpanganiban Рік тому +2

    Great video! There are also fast-degrading GFP variants to improve the temporal correspondence between the GFP signal and gene expression.

  • @brunomorini2296
    @brunomorini2296 Рік тому +2

    @ArtemKirsanov Your videos are amazing. Congratulations, how do you make your animations?

  • @fahedriachi
    @fahedriachi Місяць тому

    I am speechless, amazed by the content, the presentation, and the insight... Somewhere in my brain, a school of engrams has been recruited for this awesome YouTube channel ❤

  • @qualia765
    @qualia765 Рік тому +2

    Thank you for making these really great videos about a field I would otherwise never be able to learn about. (I have a very strong aversion to anything gory, to needles, and to pictures (or thoughts of pictures) of organs and the like.)

  • @vicentefigueroa4758
    @vicentefigueroa4758 Рік тому +3

    Brilliant video, very informative, inspiring, and entertaining! Greetings from a neuroscientist who loves your channel!

  • @aphinion
    @aphinion Рік тому

    Holy fuck, what an amazingly high-quality video and explanation. And entirely without useless stock footage; instead, graphics that actually enhance what's being said. This deserves a lot more followers!

  • @eismccc
    @eismccc 2 місяці тому +1

    You're awesome man great video, I'm in AI and this is right in my wheelhouse. Looking forward to more great content like this!

  • @qualia765
    @qualia765 Рік тому +6

    19:44 does this mean that trying to learn some big topic at the *same time* every day is more effective than at *random times* every day?

  • @Corgifunni
    @Corgifunni Рік тому

    Brilliant video, very comprehensible and straight to the point, and minimalistic enough to keep my attention. Definitely worth a sub!

  • @Diego_Cabrera
    @Diego_Cabrera Рік тому +2

    Truly an amazing video. From the content, explanation, and visuals. Keep it up!

  • @VictorHugoVale
    @VictorHugoVale 4 місяці тому +1

    Thanks a lot, this is so useful for understanding!!

  • @borisdorofeev5602
    @borisdorofeev5602 Місяць тому

    The way I'm interpreting this information is that doing things like listening to music or an off-topic audiobook while studying is not optimal.
    Your brain is trying to overlap memories without a sinusoidal property. So rather than the earlier examples, it's better to try studying two related topics, with a build-up and cool-down of intensity.
    After some time, it's actually optimal to take a break and let that sine wave return to baseline encoding intensity.
    Then after a break, build back slowly into learning and don't just dive in. Like working through a simple math problem or thinking of a good way to put a logic circuit together.
    I will try this route.

  • @squishyushi
    @squishyushi Рік тому

    Last night I was literally googling what memories physically are and how neurons work. I really would love to learn more about this stuff.

  • @repairstudio4940
    @repairstudio4940 Рік тому +1

    This is a top quality production and the information in the field of neuroscience is well explained. Liked. Subbed.

  • @johnmandreik4887
    @johnmandreik4887 Рік тому +2

    Hi ^^ I wanted to say that I really like the amount of information per slide ^^ it's clean, neat and visible, easy to follow, and therefore perfect for learning!
    Keep it up :)

  • @tommylakindasorta3068
    @tommylakindasorta3068 Рік тому +1

    Fascinating. I love learning about how the brain works.

  • @titusfx
    @titusfx Рік тому +2

    At 7:45, could we just induce a coma so it can form new memories? In any case, the tag approach is awesome.

  • @inteligenciaartificiuau
    @inteligenciaartificiuau Рік тому +2

    Impressive content! Thanks!

  • @HeduAI
    @HeduAI Рік тому

    This has got to be the coolest video on memories! Thank you.

  • @Dream4rc
    @Dream4rc Рік тому +3

    I wonder if gathering the results only through fear responses is a practical way of describing something as multifaceted as memory.

  • @richardrobertson1331
    @richardrobertson1331 10 місяців тому

    I have difficulty remaining focused on each specific new thought you present and the direction you chose to adequately cover your message. Too frequently I needed to pause the video and reflect, and then I seem to be taken in another direction when I get back to the video. Your visuals and text open too many avenues for my limited thought processes to remain on track. It reminds me of trying to follow a city map while visiting a foreign country. Getting from point A to point B eliminates exploring all the interesting sights that the side streets may have. Your visuals are superb, text is inspiring, but voice inflection is somewhat unfamiliar. Thank you, Artem, for all your considerable work that this video has displayed.

  • @ilyas_elouchihi
    @ilyas_elouchihi Рік тому

    As a Cognitive Psychology student, your channel has been super helpful to expand my understanding, props to you ❤

  • @nervous711
    @nervous711 Рік тому +6

    23:32 I have a question regarding the size of engrams. Isn't the set of engram neurons for a specific memory fixed in size? But it seems that the co-retrieval of 2 distinct engrams increases the neurons in both sets. Or is it that the new linking neurons only contain the linking information, so they don't count toward the original size of the 2 engrams?

    • @ArtemKirsanov
      @ArtemKirsanov  Рік тому +6

      Amazing point, thank you! I also had this very question while I was creating the video, but I'm afraid I don't have a great answer.
      The source paper for this finding ( pubmed.ncbi.nlm.nih.gov/28126819/ ) just reports an "increased overlap" but doesn't compare overall sizes (or I just missed it).
      My intuition is that the "reorganization" would mean that some non-overlapping neurons become excluded from the engram to keep the density constant, while increasing the overlap. But your interpretation with "linking information" is equally plausible 🤔
      If you find the answer, please let me know!

  • @Earthshine256
    @Earthshine256 Рік тому +2

    There is so much that is astonishing in this video, but what struck me most was the mouse trembling in anticipation of the shock.
    (I'll draw one like that somewhere; we'll see whether it helps with memory retrieval.)

  • @christianlagareslinares3958

    Artem, great work behind this video. Thanks for breaking down complex information and making it more accessible. I'm looking forward to bumping into you at some Neuro meeting in the US!

  • @eplv3432
    @eplv3432 Рік тому +2

    Amazing video! I have never seen such a comprehensive explanation of memory mechanisms. Any suggestions of how to do a PhD in this specific area? Which authors/institutions to look for?

  • @egwars3
    @egwars3 Рік тому

    Easily the hardest thing I've forced myself to comprehend, even as simple as you made it.

  • @KFC15326
    @KFC15326 Рік тому

    Thanks for your effort in sharing neuroscience knowledge. Greetings from South Korea.

  • @fallenangel8785
    @fallenangel8785 Рік тому +2

    Best channel on YouTube ❤

  • @DmitryRomanov
    @DmitryRomanov 8 місяців тому

    We miss you, Artem ❤️
    Wishing you inspiration, and good luck in your search, your creative work, and your life!

  • @1milionlives
    @1milionlives Рік тому +167

    The brain makes machine learning look like a child's toy

    • @Gorulabro
      @Gorulabro Рік тому +33

      And yet. There are so many parallels that pop up in modern ML to concepts in neuroscience. In most cases it's "convergent evolution" -- something that "just worked" for the ML groups -- rather than something copied from nature. Different things are hard / easy for biological / artificial neural networks, but the essence seems to be in the process of being captured.

    • @diadetediotedio6918
      @diadetediotedio6918 Рік тому +32

      @@Gorulabro
      Distant parallels, at most. For the most part, modern neural network architectures are not even really based on how the brain works, and the few that are (such as spiking neural networks) are still relatively distant approximations of how our brains produce the effects we see in reality. The truth is that we are simply far from even coming close to simulating something like this.

    • @Gorulabro
      @Gorulabro Рік тому +27

      @@diadetediotedio6918 My point is exactly that. We don't have to mimic nature to develop similar functionality. Latent representations, sparsified encoding, sequence positional encoding in transformer architectures: all of those are high-level concepts discussed on this channel that have counterparts in modern ML. Not one-to-one, because that would be as wasteful as trying to build planes with flapping wings instead of propellers.

    • @diadetediotedio6918
      @diadetediotedio6918 Рік тому +11

      @@Gorulabro
      I don't disagree that we don't need to copy nature one-to-one to get similar "functionality". But you need to be careful with your definition of "functionality". If functionality means the set of qualitative experiences that give rise to a system's general behavior, then artificial neural networks have no functionality similar to the brain: ANNs lack any qualitative representation of the world, and that kind of functionality cannot, in fact, be simulated by a computer. On the other hand, we can build excellent mimics of "functionality" in the external sense, something that merely reproduces a desired outward behavior, as ChatGPT does by producing text that appears "intelligent" and aware. There are also reasons we don't build planes with flapping wings that go well beyond efficiency, and some birds mostly glide and only use their wings for lift, yet nobody says planes are simulations of birds, or that planes function like birds. The similarity between a bird and an airplane is about the same as that between a bird and a firearm projectile or a ballistic missile: both are "flying" in some sense, but having that "functionality" doesn't let us translate the knowledge into terms of what goes on in birds, the way many people do when they claim AIs have an inner working close to human consciousness.
      It takes a lot of care to do these analyses, but in general I don't disagree that these are efficient ways of approaching the kind of intelligent external behavior we seek to automate.

    • @1milionlives
      @1milionlives Рік тому

      Machine learning is about interpolation over a dataset; it can only learn statistically.
      Statistical learning is the lowest form of intelligence and is very different from interaction and survival in a real-world environment.
      The best state-of-the-art ML model is much stupider than the simplest bacterium.

  • @lucasteo5015
    @lucasteo5015 Рік тому +18

    If we were to implement all of these behaviors as agents that act over time, with each neuron being a deep neural net of its own, I think it would be possible to replicate a digital artificial human brain. It seems like we already have a few puzzle pieces here and there, and as the research goes on and more findings get implemented in code, it is definitely doable. The disappointing part of NNs right now is that they don't get trained the way the brain is: everything is fed into the model and activates a bunch of nodes, unlike what we see here, where only a few highly active neurons fire. We can use some dropout and such, so maybe the dropout signal could be transmitted to neighboring agents, shutting down neighboring agents made of DNNs to make them less active; and maybe each agent could be spawned or despawned to make the entire thing dynamic, so that if two mappings fire at once, new agents spawn in and link them together, etc. There are more questions ahead, like how you train them and how everything works out mathematically, but it would surely be interesting research to do.

    • @ShpanMan
      @ShpanMan Рік тому +8

      It's a tragedy that backpropagation works as well as it does. Most ML is stuck on this obvious local maximum instead of taking more inspiration from the brain and fixing efficiency, lifelong learning, and scalability.

    • @snk-js
      @snk-js Рік тому +1

      As Roger Penrose said, consciousness is computationally impossible and neurons are infinitely difficult to solve. Since we depend on data and energy to keep a simulated consciousness alive, we might run into the limits of a handcrafted automaton (maybe when quantum computers and negative energy are domesticated in the near future we might achieve something as rare as a pure human).

    • @Daniel_Zhu_a6f
      @Daniel_Zhu_a6f Рік тому +3

      You might be interested in the recently conceived "forward-forward" learning mechanism, which is much more neuromorphic and has local parameter updates. doi: 10.48550/arXiv.2212.13345 (there's a rough code sketch of the idea at the end of this thread)

    • @J_Machine
      @J_Machine Рік тому +3

      @@snk-js Penrose is not a neuroscientist

    • @flambambam
      @flambambam Рік тому +2

      @@snk-js As @user-sq7zm3qt5e said, Penrose is not a specialist in this field. There are good arguments both for and against his claims.
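
Since the Forward-Forward paper (doi: 10.48550/arXiv.2212.13345) came up in this thread, here is a rough single-layer sketch of the idea as described there: each layer is trained locally so that its "goodness" (sum of squared activations) is high for positive data and low for negative data, with no gradients flowing between layers. The hyperparameters and data below are invented; treat this as one reading of the method, not a faithful reproduction of the published code:

```python
import numpy as np

class FFLayer:
    """One layer trained with a local objective; no gradients flow between layers."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0, seed=0):
        self.W = np.random.default_rng(seed).normal(0.0, 0.1, (n_in, n_out))
        self.lr, self.threshold = lr, threshold

    def activations(self, x):
        # pass on only the direction of the input (a simple stand-in for layer normalisation)
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return xn, np.maximum(0.0, xn @ self.W)              # ReLU units

    def train_step(self, x_pos, x_neg):
        for x, is_positive in ((x_pos, True), (x_neg, False)):
            xn, h = self.activations(x)
            goodness = (h ** 2).sum(axis=1)                  # per-sample "goodness"
            p = 1.0 / (1.0 + np.exp(-(goodness - self.threshold)))
            # gradient of the logistic loss w.r.t. goodness:
            # push goodness up for positive data, down for negative data
            d_goodness = -(1.0 - p) if is_positive else p
            self.W -= self.lr * xn.T @ (d_goodness[:, None] * 2.0 * h) / len(x)

rng = np.random.default_rng(1)
layer = FFLayer(8, 16)
pos = rng.normal(+1.0, 1.0, (32, 8))                         # made-up "positive" data
neg = rng.normal(-1.0, 1.0, (32, 8))                         # made-up "negative" data
for _ in range(200):
    layer.train_step(pos, neg)
print("mean goodness, positive:", (layer.activations(pos)[1] ** 2).sum(axis=1).mean())
print("mean goodness, negative:", (layer.activations(neg)[1] ** 2).sum(axis=1).mean())
```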

  • @muhammadasadhaider6893
    @muhammadasadhaider6893 Рік тому +2

    Amazing content, thank you!

  • @mikeg1368
    @mikeg1368 Рік тому +10

    It would be interesting to know how much data an engram needs in terms of bytes. And how much memory is available in theory to an average person?

    • @antonystringfellow5152
      @antonystringfellow5152 Рік тому +7

      No bytes at all.
      A byte is a series of eight bits (ones and zeros).
      Neurons don't function with ones and zeros. Neurons are not digital, they're analogue. Synapses are analogue. This gives them a much greater capacity than a transistor in a processor, which is only used as a switch and as such only processes two values (ones and zeros).

      Today's neural networks merely simulate neurons and synapses, digitally. They're a far cry from the real thing.
      Neuromorphic processors that are analogue are being developed by several companies. These emulate rather than simulate neurons and synapses. Very promising technology, which offers such advantages as more computing power for less energy and an inherent ability to continuously learn rather than requiring a resource-hungry training process. Unfortunately, though, the size and density of the components are currently nowhere near a match for the latest GPUs, such as those produced by Nvidia. This is probably why Nvidia is showing no interest in developing its own yet - it's doing a great job with GPUs.

    • @casey-gt8nl
      @casey-gt8nl Рік тому

      @@antonystringfellow5152 ANALOG IS SWINGING BACK BABY!!!!!!!

    • @TheRyulord
      @TheRyulord Рік тому +4

      @@antonystringfellow5152 Bits are a unit of measurement for information/uncertainty, not just some detail of how computers work. You can quantify the amount of information needed to describe any physical system as being some number of bits.

    • @AkiraKurai
      @AkiraKurai Рік тому +1

      @TheRyulord Bits are by definition binary; you cannot encode analog data without losing some of the raw information and then building an interpreter to guess what the actual raw information was. Take any analog wave and transform it into a digital wave.

    • @TheRyulord
      @TheRyulord Рік тому +2

      ​@@AkiraKurai You don't lose any information. Look up "Bekenstein bound". All physical systems, including analog electronics, can be losslessly described by a string of bits.

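To make the bits-versus-analog point above concrete: once noise is accounted for, an analog quantity conveys a finite number of bits per observation (Shannon's channel-capacity formula for a Gaussian channel). The signal-to-noise ratio below is an arbitrary illustrative number, not a measured property of synapses:

```python
import math

snr = 100.0                                        # assumed signal-to-noise power ratio
bits_per_observation = 0.5 * math.log2(1.0 + snr)  # capacity of a Gaussian channel per use
print(f"~{bits_per_observation:.1f} bits per noisy analog value")   # ~3.3 bits
```
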
  • @mancer6322
    @mancer6322 Рік тому

    I'm not sure if you will see this comment. I am a Chinese computational neuroscience student, and I'm really inspired by your series of videos explaining neuroscience. I was wondering if you'd be cool with me translating your videos and sharing them on a Chinese video platform with Chinese subtitles?

  • @walteralter1686
    @walteralter1686 Рік тому

    Really good, gradient pedagogy, emphasis on clarity. I'll check out the rest of your channel.
    Smarter-faster.

    • @Sonicdude3
      @Sonicdude3 Рік тому

      This is the REAL NEWS I subscribed for!

  • @anastassiya8526
    @anastassiya8526 Рік тому

    The video is a brilliant piece of work; the structure of the material is perfectly designed for understanding it to the fullest. Thank you! It inspires me even more to get a master's in CS!

  • @SuperKirby_Gaming
    @SuperKirby_Gaming Рік тому +1

    Thank you for this video!

  • @natalies9829
    @natalies9829 Рік тому

    thank you so much for this video! It offers so much invaluable information, easily broken down with analogies and detailed visuals. Keep up the great work; I always learn something so interesting with each one of your videos!

  • @physiologic187
    @physiologic187 Рік тому +9

    Hi there, I had a question. Why aren't memories overwritten / replaced during co-allocation?
    To me it seems that when stimulus 2 occurs (within 6 hrs of stimulus 1), the memory associated with stimulus 1 should be replaced with the memory associated with stimulus 2 (since the same neurons, whose excitability is the highest, outcompete the neighboring neurons for storing that memory in that engram). Or is it possible for a single engram to host multiple memories?

    • @Youtuberboi596
      @Youtuberboi596 Рік тому +2

      From my understanding of artificial neural networks, neural nets are really good at reusing connections and neurons to store different information; basically, they are awesome at compressing all sorts of data, because each neuron is tuned so it can be used in different pathways and play a different role in each one. So I think, in organisms, a new similar memory is just new surrounding neurons participating while all the previously involved neurons adjust to accommodate the most important memories plus the new (possibly similar) ones. I think this is why we start to forget old stuff when we learn new stuff, but we can revive that memory easily with a little relearning. It's just neurons trying to compress information optimally. Just a theory though, I might be wrong.
      Btw, look at 20:42: you can see two new neurons involved due to the second stimulus.

    • @Youtuberboi596
      @Youtuberboi596 Рік тому +3

      Also, there is this thing I heard in machine learning circles: when learning new stuff, neural networks forget old stuff to a high degree, unlike humans. So the idea is to train on new and old stuff together, so everything is in the dataset. This was inspired by the idea that maybe during sleep the brain replays old and new material together to avoid forgetting the old. (A small sketch of this rehearsal idea appears at the end of this thread.)

    • @physiologic187
      @physiologic187 Рік тому

      @@Youtuberboi596 Thank you 🙏

    • @chri-k
      @chri-k Рік тому +1

      To add:
      Neurons are simply a lot more complex than an ANN node and can do a lot of things with their inputs besides just adding them (like XOR and integration over time).
      This allows them to be reused even more efficiently.
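
The "learn new + old stuff together" idea mentioned earlier in this thread is usually called rehearsal or experience replay. A minimal sketch of such a training loop follows; `model.train_on_batch` is a hypothetical interface standing in for whatever update rule is actually used:

```python
import random

def train_with_rehearsal(model, new_data, replay_buffer, steps=100,
                         batch_size=32, replay_fraction=0.5):
    """Interleave stored old examples with new ones so old knowledge keeps being revisited."""
    for _ in range(steps):
        n_old = min(int(batch_size * replay_fraction), len(replay_buffer))
        batch = random.sample(new_data, batch_size - n_old) + random.sample(replay_buffer, n_old)
        random.shuffle(batch)
        model.train_on_batch(batch)    # hypothetical: one update step on the mixed batch
    # keep a small random subset of the new data around for future rehearsal
    replay_buffer.extend(random.sample(new_data, min(len(new_data), 100)))
```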

  • @sbhtta
    @sbhtta 8 місяців тому

    This is such a fantastic video! 😄 Thank you so much for your effort in presenting these topics so beautifully.❤

  • @TacticalPecans
    @TacticalPecans Рік тому +2

    Another top tier video.
    I’m curious how/if these mechanisms are influenced by, or vary in, brains with PTSD, addiction, or other maladaptive tendencies (ie: the relationship between PTSD and engram formation and linking, for example). Would we see larger engrams with more overlapping neurons? Less optimized neuronal selection and encoding?
    Thank you for the amazing content, as always. You’ve left me with much to think about and research!

  • @pauljones9150
    @pauljones9150 Рік тому +2

    Great video as always 🎉