Elegant Geometry of Neural Computations

  • Published Dec 10, 2024

COMMENTS • 128

  • @ArtemKirsanov
    @ArtemKirsanov  3 days ago +7

    To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/ArtemKirsanov. You'll also get 20% off an annual premium subscription.

    • @joaoGabriel_3
      @joaoGabriel_3 16 hours ago

      Artem, first of all, what a great video! I love the animations and how you are able to make this topic so intuitive!
      I would like to chat with you about an idea I'm developing about creating a new physics-derived mathematical model of a neuron's physiology in 3 dimensions.
      Your help would be greatly appreciated!
      Can I contact you in any way?

  • @JackDespero
    @JackDespero 2 days ago +29

    I am not a neurologist, but as a physicist I really enjoyed this and your previous video. It is always a great feeling to gather new knowledge.

  • @I_am_who_I_am_who_I_am
    @I_am_who_I_am_who_I_am 3 days ago +36

    The timing of this one is impeccable.

  • @tau9632
    @tau9632 3 days ago +93

    The animations are so gorgeous. This is what I always dreamed of as a kid - I wanted to *see* into things and see their hidden structures and dynamics.

    • @ArtemKirsanov
      @ArtemKirsanov  3 days ago +4

      Thank you!!

    • @spiralsun1
      @spiralsun1 2 days ago +1

      Absolutely 😊❤

    • @jm3279z3
      @jm3279z3 2 days ago +2

      What software is being used? The designs are very beautiful!

    • @keylime6
      @keylime6 2 days ago +2

      @@ArtemKirsanov Is this video made with Manim?

    • @keylime6
      @keylime6 2 days ago +1

      @@jm3279z3 I'm not 100% sure, but I think it's made with Manim, a Python library made by YouTuber @3blue1brown for making videos visualizing math.

  • @GeoffryGifari
    @GeoffryGifari 2 days ago +10

    Some questions:
    1. If the history of past inputs is crucial, how far back (in seconds) does it still matter?
    Can an input from, let's say, 10 seconds ago still matter for the neuron's output?
    2. Will anything interesting happen if the external current is periodic? (see the sketch below)
    3. After a neuron's state enters a limit cycle, how can it escape? Surely that repetitive firing can't be sustained forever, especially if nutritional requirements are considered
    4. What kinds of new features would arise if this "memory effect" is incorporated into artificial neural networks?
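
    A minimal sketch for exploring question 2 numerically, assuming a FitzHugh-Nagumo neuron as a stand-in for the models in the video (the model choice and all parameter values here are illustrative assumptions, not taken from the video):

    ```python
    import numpy as np

    def fitzhugh_nagumo(T=200.0, dt=0.01, I_amp=0.5, freq=0.05):
        """Euler-integrate a FitzHugh-Nagumo neuron under a sinusoidal drive."""
        a, b, eps = 0.7, 0.8, 0.08          # classic FHN parameters
        n_steps = int(T / dt)
        v, w = -1.2, -0.6                   # start near the resting state
        vs = np.empty(n_steps)
        for i in range(n_steps):
            I = I_amp * np.sin(2 * np.pi * freq * i * dt)  # periodic input
            dv = v - v**3 / 3 - w + I
            dw = eps * (v + a - b * w)
            v, w = v + dt * dv, w + dt * dw
            vs[i] = v
        return vs

    # Sweeping freq and I_amp shows entrainment: the neuron can lock to the drive
    trace = fitzhugh_nagumo()
    spikes = int(np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0)))
    print("spikes (upward crossings of v = 1):", spikes)
    ```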

  • @anton9690
    @anton9690 3 days ago +86

    That meta-revelation when you realize that an aggregation of billions of neurons like these enters those multidimensional manifolds to understand themselves, to write that book about themselves, to create this amazing video about themselves, etc. ❤

    • @erawanpencil
      @erawanpencil 3 days ago +1

      I've always found it strange that phase space diagrams often resemble neurons themselves.

    • @ruudh.g.vantol4306
      @ruudh.g.vantol4306 3 days ago +1

      Don’t underestimate these allies within us:
      m.ua-cam.com/video/SbvAaDN1bpE/v-deo.html

    • @jakublizon6375
      @jakublizon6375 2 days ago

      @@erawanpencil Well, because it is all signals. Encoding information into anything that oscillates is really easy.

  • @MultiNeurons
    @MultiNeurons 3 days ago +15

    Marvellous, the landscape of neuronal dynamics opens up and shows off its fundamental secrets. Thank you, and thanks to Mr. Izhikevich for his studies.

  • @AgusVallejoV
    @AgusVallejoV 2 days ago +8

    15:05 I was just thinking it'd be really funny if the two points just outright exploded upon merging, but thought nah, this video seems too serious to do something silly like that. Had to laugh out loud when the explosion really came in.
    Keep up the good work, really really nice visualizations!

    • @ArtemKirsanov
      @ArtemKirsanov  2 days ago +1

      Thank you!!
      I thought the same thing when editing the video, so I added the explosion :D

  • @emiel2712
    @emiel2712 3 days ago +10

    Wow, I'm impressed at how fast you are making videos with this level of animation and editing. Great work; I hope you don't get burnt out, though, like I've seen happen to some other youtubers from the pressure of wanting to satisfy their audience. Maybe you've gotten efficient in your workflow so it doesn't take that much time. Anyway, cool video.

  • @anastassiya8526
    @anastassiya8526 2 days ago +2

    The video is really helpful for understanding the big picture behind these diff equations

  • @Faptimus420
    @Faptimus420 1 day ago +1

    I have recently finished my MSc in Computer Science, with a focus on data science and a particular interest in deep learning. While I do find those interesting too, I have, as an extracurricular activity, spent time learning about spiking neural networks and neuromorphic computing, as I find these more biologically plausible models much more fascinating than the "run-of-the-mill" Hebbian/rate-coding-based models.
    While there are many educational videos on the latter, intended to help you visualize their behaviors, from the likes of 3b1b and many others, there is a severe lack of videos on the former, and all the learning I've done had to rely only on textbooks and papers.
    I'd like to thank you for spending so much time on righting that terrible wrong with videos of such high quality, and allowing me and others to gain a much better intuition into these topics.

  • @sepro5135
    @sepro5135 2 days ago +1

    I just read a book about system dynamics (on the theoretical math side of things). It's always stunning to me how beautiful dynamical systems described only by ODEs are, second only to the amazing results of PDEs, mainly the Navier-Stokes equations.

  • @PabloMayrgundter
    @PabloMayrgundter 8 hours ago

    This is the best applied dynamics presentation I've seen in video, text, books... anywhere.
    Kudos!

  • @guzzagrizzly372
    @guzzagrizzly372 3 days ago +8

    Yay! This is great!
    Have you ever considered doing a video on Active Inference? Seems like a cool topic for visualising how statistical physics combines with theoretical neuroscience.

    • @ArtemKirsanov
      @ArtemKirsanov  3 days ago +8

      Yes!! In fact, the idea of Active Inference started the mini-series on Hopfield nets, Boltzmann machines and Cross-Entropy, as stepping stones toward it.
      I'll for sure make the active inference video at some point in the future; currently I'm still doing background literature mining to understand it myself :D

    • @gabberwhacky
      @gabberwhacky 7 hours ago +2

      @@ArtemKirsanov Nice! I'd also be interested in active inference and related topics like Friston's free energy principle!

  • @MrDNWave
    @MrDNWave 3 days ago +3

    Wow, this was an amazing production with its own uniqueness of presentation. I wish at times you were a little bit slower when presenting key concepts or gave more variations to understand them, just to be able to savor them better.

  • @PackMowin
    @PackMowin 3 days ago +5

    These video drops make my week

  • @keylime6
    @keylime6 2 days ago +2

    I barely understand anything going on in this video but it's interesting as hell

  • @aschroed
    @aschroed 3 days ago +5

    Love these videos, I'm currently taking a class called Complex Adaptive Systems where we simulate these dynamical systems and visualise them in the phase plane! Super cool

  • @tarunkumar1091
    @tarunkumar1091 3 days ago +2

    Thank you so much, Artem. I can't explain how much value your videos have added to my life.

  • @PeacefulAnxiety
    @PeacefulAnxiety 2 days ago

    I respect your clarity and professionalism in these topics.

  • @666shemhamforash93
    @666shemhamforash93 3 days ago +3

    Phenomenal series Artem! Genuinely impressive work. Please do RNNs next! 🤞

  • @atomicgeneral
    @atomicgeneral 13 hours ago

    @13:03 : "under the same current": Is the current visible anywhere in the state diagram? Does it correspond to a magnitude of (horizontal) perturbation? If so, that would make me understand your comment: small perturbations cause the state to return to the stable equilibrium point; larger perturbations cause it to go into cycling behavior.

  • @Xanoxis
    @Xanoxis 2 days ago

    Fascinating stuff. It gives a rough idea of how some kinds of learning emerge in neurons and the body, when those patterns change as you exercise, tuning them better.

  • @davidhand9721
    @davidhand9721 3 days ago +11

    As a technicality, yes, there _is_ a biophysical and biochemical difference between two neurons with different histories, so the first statement of the video is sort of wrong. Two _identical_ neurons, down to the atom, given identical inputs, _will_ give you the same outputs, because the history of the neuron is stored in its biochemical state, environment, connections, and other cellular variables. I get that you're trying to introduce a computational topic, but it's important to remember that a computational neuron is much, much simpler than a real one because real neurons have an extremely rich, high-dimensional internal state. Not trying to be negative, I'm a big fan of your work.

    • @drdca8263
      @drdca8263 2 days ago +1

      Didn’t he say *visually* identical, or something like that? I thought I remembered something that specifically implied that he didn’t mean atom-for-atom identical, but just like, “the same kind of neuron, without any visual differences in like, the length of the axons or dendrites etc.”

    • @PeppoMusic
      @PeppoMusic 1 day ago

      @@drdca8263 That would mostly be the biophysical state, I'd say, in contrast with the biochemical state, including environmental biochemical factors, which are not fully observable (as they exist at the limit of what is directly observable). This is also where quantum effects can start to get more involved, so things get a lot more difficult to model and observe.

  • @vastabyss6496
    @vastabyss6496 2 days ago

    I hope that advances in neuroscience will allow us to build more efficient artificial neural networks and neuromorphic computers. Thanks for sharing all this cool knowledge with the world!

  • @kgblankinship
    @kgblankinship 2 days ago +1

    There is a class of artificial neural networks that have internal memory, known as recurrent networks. There is a sizable body of theory on these structures. An important subset of this theory is that of Content Addressable Memory.
    What's good about this method is that it marks off regions in the state space where the behavior is either stable or tends toward a limit cycle. One would think that these could be specified by formulas taken from the differential equations.
    This work is similar to what was done in aerospace vehicle flight mechanics during the 1980s. But there are larger questions that beg for answers: What are the feedback mechanisms in a neuron that are associated with learning? How are individual neurons assigned to a given function? And one more immediate to Artem's work: How stable (repeatable) is the operation of a neuron once it's learned how to support a given function? How does learning affect this picture? Also, how do the H-H equations tie into learning and memory?
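
    A sketch of the Content Addressable Memory idea mentioned above, using a classical Hopfield network (assumed here as the representative recurrent architecture; an editorial illustration, not code from the video):

    ```python
    import numpy as np

    def hopfield_store(patterns):
        """Hebbian outer-product rule; patterns are lists of +/-1."""
        P = np.array(patterns, dtype=float)
        W = P.T @ P / P.shape[1]
        np.fill_diagonal(W, 0.0)            # no self-connections
        return W

    def hopfield_recall(W, probe, steps=20):
        """Synchronous sign updates until the state stops changing."""
        s = np.array(probe, dtype=float)
        for _ in range(steps):
            s_new = np.sign(W @ s)
            s_new[s_new == 0] = 1.0
            if np.array_equal(s_new, s):
                break
            s = s_new
        return s

    memory = [1, -1, 1, 1, -1, -1, 1, -1]
    W = hopfield_store([memory])
    noisy = list(memory)
    noisy[0] *= -1
    noisy[3] *= -1                          # corrupt two bits
    print(hopfield_recall(W, noisy))        # recovers the stored pattern
    ```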

  • @justblank0
    @justblank0 3 days ago +1

    The analysis of phase planes is fascinating. It seems to rekindle an exploratory feeling inside me, similar to data exploration via regular graphs. Wondering if this is something I can apply in my day-to-day.

    • @John-c4r1o
      @John-c4r1o 3 days ago +2

      When I was a kid in junior high (year 8), our overly qualified and very elderly math teacher explained how parameters in engineering were shown on coordinate graphs, and thus how in the old days solutions for aircraft dynamics sat within the intersection of multiple parameters. I've used that thought process a lot in my life to very good effect.

  • @potatoonastick2239
    @potatoonastick2239 2 days ago

    Absolutely mind-blowing! These videos are public service educational MASTERPIECES. Never stop teaching you absolute legend

  • @pirminborer625
    @pirminborer625 19 hours ago

    It would have been nice to visualise the gradients as the slope of a 3D surface. I didn't know anything about neural dynamics before; very interesting and well explained. One can only start grasping the complex behaviour of linking different types of neurons together and how that changes their activation thresholds and patterns 🤯

  • @drdca8263
    @drdca8263 2 days ago +2

    Very nice! I learned a lot :)
    I wonder, if you took the whole 4D dynamics, if you simulated the behavior while having a variety of different external current sources, would the points in the 4D space mostly stay around a particular 2D surface in that 4D space?
    Well, I suppose if the system can be well approximated with only one of those 3 channels, then the answer sort of has to be yes, but, I guess I mean…
    something like “If you wanted to be a little more precise than the version with voltage and one ion channel, but still wanted to stick to a 2D space, could you do better by picking a surface which is slightly deformed compared to the one being used here?”
    Though, I guess with how the m channel fraction is set to m_\infty, that is already picking a somewhat complicated surface in the 4D space, rather than just setting 2 of the variables to constants.
    I guess what I was thinking was like,
    “if you set up an auto-encoder which tried to encode (V,n,m,h) as (V,x), and decode back to (V,n,m,h) with minimal error, and where V in the encoding is forced to be the same as the V that is encoded, would the best encoding for this be much different than x=n?”
    This is a very cool video, thank you
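
    A sketch in the direction of this question (editorial; standard Hodgkin-Huxley parameters are assumed): simulate the full 4D system and check how well h is predicted linearly from n. The classical Rinzel-style 2D reduction exploits exactly this, setting m = m_inf(V) and using the roughly linear relation between h and n.

    ```python
    import numpy as np

    # Standard Hodgkin-Huxley rate functions (V in mV, time in ms)
    a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)
    a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
    a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
    b_h = lambda V: 1 / (1 + np.exp(-(V + 35) / 10))

    gNa, gK, gL = 120.0, 36.0, 0.3          # mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4        # mV
    C, I_ext, dt = 1.0, 10.0, 0.01          # I_ext drives repetitive spiking

    V, n, m, h = -65.0, 0.32, 0.05, 0.6     # near the resting state
    ns, hs = [], []
    for _ in range(int(100 / dt)):          # 100 ms of spiking
        I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I_ext - I_ion) / C
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        ns.append(n)
        hs.append(h)

    slope, intercept = np.polyfit(ns, hs, 1)
    print(f"h ~ {slope:.2f} * n + {intercept:.2f}")  # roughly h ~ 0.8 - n
    ```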

  • @atomicgeneral
    @atomicgeneral 13 hours ago

    Wonderful. Some things that weren't clear: @11:10 : Saddle point trajectories: they all seem to be getting pushed AWAY (not some towards, others away). @11:30 : why is there a gap in the separatrix (white line) at the bottom? @11:59 The limit cycle and separatrix both seem to be trajectories? I don't see the difference between them: is the separatrix a geometric trajectory? In that case, how is the limit cycle something distinct from the separatrix?
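
    On the last question: the separatrix is itself a trajectory, namely the stable manifold of the saddle, whereas the limit cycle is a different, closed trajectory. A sketch of how a separatrix is typically traced numerically, using a damped Duffing system as a stand-in (an assumption, not the video's neuron model): integrate backward in time from points displaced from the saddle along its stable eigenvector.

    ```python
    import numpy as np

    def f(state):
        """Damped Duffing: saddle at the origin, stable spirals at (+/-1, 0)."""
        x, y = state
        return np.array([y, x - x**3 - 0.1 * y])

    # Jacobian of f at the saddle (0, 0)
    J = np.array([[0.0, 1.0], [1.0, -0.1]])
    evals, evecs = np.linalg.eig(J)
    v_stable = evecs[:, np.argmin(evals)]   # eigenvector of the negative eigenvalue

    def trace_branch(start, dt=-1e-3, steps=20000):
        """Backward-time Euler: going backward in time walks OUT along the
        stable manifold, tracing the separatrix away from the saddle."""
        s, path = np.array(start), []
        for _ in range(steps):
            s = s + dt * f(s)
            path.append(s.copy())
        return np.array(path)

    eps = 1e-4
    branch_a = trace_branch(eps * v_stable)    # one arm of the separatrix
    branch_b = trace_branch(-eps * v_stable)   # the other arm
    print(branch_a[-1], branch_b[-1])          # plot both arms to see the basin boundary
    ```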

  • @Patapom3
    @Patapom3 2 hours ago

    Super interesting and amazingly produced! Congrats!

  • @jafetriosduran
    @jafetriosduran 2 days ago

    We could probably use a state space with fractional derivatives, because these have the property of requiring the entire past to calculate a new state. This is unlike a classical state space, where if two state trajectories reach the same value and at that moment the input becomes zero, the next instant the response will be the same. In a fractional space, however, if two state trajectories reach the same value and the input becomes zero again, the trajectories at the next instant will be different, due to the history effect of the Riemann-Liouville integral. A pseudo-fractional space is actually a state space of integer order with an uncountably infinite number of states; that is, it has a distributed state variable.
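
    An illustration of that history effect (editorial sketch): the Grünwald-Letnikov discretization of a fractional derivative weights the entire past of the signal, unlike an integer-order derivative, which only sees the last few samples.

    ```python
    import numpy as np

    def gl_fractional_derivative(x, alpha, dt):
        """Grunwald-Letnikov estimate of D^alpha x at every sample.
        Each output value is a weighted sum over ALL past samples,
        which is the 'history effect' described above."""
        n = len(x)
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):               # w_k = (-1)^k * binom(alpha, k)
            w[k] = w[k - 1] * (1 - (alpha + 1) / k)
        d = np.array([np.dot(w[:i + 1], x[i::-1]) for i in range(n)])
        return d / dt**alpha

    t = np.linspace(0, 1, 200)
    print(gl_fractional_derivative(t, 0.5, t[1] - t[0])[-1])
    # D^0.5 of f(t) = t is 2*sqrt(t/pi); at t = 1 that is ~1.128
    ```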

  • @WCKEDGOOD
    @WCKEDGOOD 2 days ago

    Very nicely done, it makes me want to study neuronal phase spaces for days.

  • @Apodeipnon
    @Apodeipnon 2 days ago

    You are brilliant at explaining and animating these things

  • @Filup
    @Filup 3 days ago +4

    I am studying a double BA in mathematics and computer science, and I have studied dynamical systems of ODEs. Is that enough background for that book? Or will I need some kind of understanding of biology (of which I have zero haha)?

    • @ArtemKirsanov
      @ArtemKirsanov  3 days ago +5

      Absolutely! The book is really self-contained, explaining all the necessary background (including electrophysiology in chapter 2)

  • @CaarabaloneDZN
    @CaarabaloneDZN 2 days ago

    I am writing my thesis on fluid simulation, and it's incredible how similar these topics are. It feels like I'm watching a video on fluid mechanics but with another skin lol

  • @tau9632
    @tau9632 3 days ago +1

    Another epic video - well done mate. It is absolutely worthy of your tattoo.

  • @JasonCummer
    @JasonCummer 12 hours ago

    Reminds me of the tipping points of climate science.
    And loving your neuroscience videos; they're helping me get back into the space. Students have such great resources today.

  • @smdgch
    @smdgch 3 days ago +3

    Tyoma, nice work!

  • @TheFinalFrontiersman
    @TheFinalFrontiersman 3 days ago +1

    As someone who loves the idea of being a cyborg, the idea that we understand neurons on this level is so cool... On the other hand, it's terrifying to see how much computational power each individual neuron has when we have almost 90 billion to deal with!!!

    • @Curiosiate_
      @Curiosiate_ 3 days ago

      As someone who loves being a cyborg (via basic means) we aren't even close to pushing the limits of our brains I think. We are hungry for structure, meaning, and use any of it we can to abstract, predict, and navigate reality and ourselves.

  • @coffeeicecubes2419
    @coffeeicecubes2419 2 days ago

    great intro to dynamical systems tbh

  • @renerekers9158
    @renerekers9158 3 days ago

    Wonderful. Huge thanks for this excellent explanation!

  • @phdnk
    @phdnk 1 day ago

    I can't help but think about heart rhythm neurons more than about brain neurons. About fibrillation, arrhythmia, cardiac arrest, defibrillation, etc.

  • @GeoffryGifari
    @GeoffryGifari 2 days ago +1

    Oh and not exactly about the video topic, but still neuroscience: Is human memory encoded just in the physical geometry of neurons connected with each other, or does the firing pattern of neurons also matter in determining what we remember?

    • @GeoffryGifari
      @GeoffryGifari 2 days ago +1

      In other words, should a network of neurons keep firing to be able to store memory? and should the neurons fire in a consistent, specific way for the encoded memory to be unchanged?

    • @PeppoMusic
      @PeppoMusic 1 day ago +1

      Both, IIRC, since "information" can be effectively stored in any kind of "state" of sufficiently stable configurable properties (at any level), including continuous firing patterns, various biochemical balances and electronegativity. But the distinction might be that of short-term and long-term memory (however, "biophysical" changes like the opening and closing of ion channels kind of bridge the gap, in stability and in being both chemical and physical).

  • @icandreamstream
    @icandreamstream 3 days ago +4

    Let’s go 🔥🙌🏻

  • @thiagoborduqui
    @thiagoborduqui 3 days ago +1

    Fantastic video!

  • @prasadkandra
    @prasadkandra 1 day ago

    Another Nobel-grade research video ❤
    All the best 💐

  • @SasskiaLudin
    @SasskiaLudin 2 days ago

    To address current LLMs' shortcomings, one needs a more potent mathematical modelling toolset. One way to pursue this is to extend backpropagation beyond Euclidean space to more subtle geometries able to tackle higher NN feature spaces: beyond complex numbers and even quaternions, leveraging Clifford algebras. At least this is my personal journey.

  • @JohnbelMahautiere
    @JohnbelMahautiere 3 days ago

    Thank you for this direction

  • @charliesteiner2334
    @charliesteiner2334 1 day ago

    As the great Baba Brinkman would say "once you bust them up into pieces / It’s tough to go back, ‘cause... hysteresis"

  • @marcgehring9530
    @marcgehring9530 3 days ago

    Beautiful animations!

  • @BlueBirdgg
    @BlueBirdgg 3 days ago

    Very interesting videos. Thanks!

  • @EkShunya
    @EkShunya 3 days ago +1

    i just had my mind blown

  • @randomchannel-px6ho
    @randomchannel-px6ho 1 day ago

    This is why I went to school wanting to study ML but ended up studying math and geometry

  • @betterlifeexe4378
    @betterlifeexe4378 2 days ago +1

    Learning about this and Hopfield networks in the context of AI at the same time is breaking my brain, which is ironic.

  • @potatoonastick2239
    @potatoonastick2239 2 days ago

    These videos are an autodidact's dream

  • @tau9632
    @tau9632 3 days ago +1

    Now I also understand just how different in-silico neural nets are from the OG biological neural nets - they completely miss all this time-dynamic stuff. And who knows what implications that has for their reasoning/thinking/existing-as-a-consciousness abilities....

    • @drdca8263
      @drdca8263 2 days ago +1

      There are people working on “spiking neural nets” which try to imitate the spiking behavior to some extent, but I don’t know how closely they match the behavior here, which I certainly didn’t know about.
      (I found this video to be quite informative)

  • @nananou1687
    @nananou1687 2 days ago

    Can you please do a video on neuromorphic computing?

  • @andrepenteado
    @andrepenteado 3 days ago

    Love your videos!

  • @Pedritox0953
    @Pedritox0953 3 days ago

    Great video! Peace out!

  • @ShpanMan
    @ShpanMan 2 days ago

    Such stunning visualizations of 2D to 1D projections. Humans are so blind, we have a tiny viewing port into the world, no wonder we are so ignorant.
    Hopefully AI can do the more complex thinking for us.

  • @franh9833
    @franh9833 1 day ago

    What is your opinion on the Bienenstock-Cooper-Munro theory?

  • @marcomonti5758
    @marcomonti5758 3 days ago

    Very interesting!!

  • @sebstr8382
    @sebstr8382 2 days ago

    Great video yet again!
    There is just one thing that I didn't understand.
    The effect of the 4 types made sense to me, in how they interact and respond to a current. But I'm not sure I understand how they relate to each other; is there some variable that affects which one of the 4 types a neuron will be?

    • @ArtemKirsanov
      @ArtemKirsanov  2 days ago

      Thanks!
      Which particular kind is realized is determined by the biophysical parameters - how fast the channels open, what their voltage dependence looks like, the value of the reversal potential for different ions, etc.

  • @banzaipiegaming
    @banzaipiegaming 23 hours ago

    How well does this model scale? Is there an upper limit to the number of neurons before accuracy loss is statistically significant? I know nothing about modelling in neurology.

  • @Johnnius
    @Johnnius 3 days ago +2

    Do you happen to publish the animations as open-source? I would like to play with the dynamical system on my own, without having to code the entire thing.

    • @ArtemKirsanov
      @ArtemKirsanov  2 days ago +2

      Just uploaded the code to Github :)
      github.com/ArtemKirsanov/UA-cam-Videos/tree/main/2024/Elegant%20Geometry%20of%20Neural%20Computations

    • @Julian-tf8nj
      @Julian-tf8nj 2 days ago

      Ditto here! :)

  • @GeoffryGifari
    @GeoffryGifari 2 days ago +1

    Wait wait... I don't really get how the phase space having a "basin of attraction" bounded by a separatrix ends up developing hysteresis/memory

    • @GurjinderSinghRathore
      @GurjinderSinghRathore 2 days ago +1

      The neuron, once pushed into the limit cycle, stays on it. That means its current state is determined by something that happened in the past, i.e. the initial push that sent it into the limit cycle. That is the memory of the neuron: 1 bit of memory that remembers the push.
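
      A sketch of that one bit of memory in the cleanest possible setting: the radial part of the subcritical Hopf normal form rather than a biophysical neuron model (an assumption made for brevity). A rest state at r = 0 coexists with a stable limit cycle, and a brief kick decides which one the system "remembers".

      ```python
      def simulate(r0, mu=-0.1, dt=0.001, T=100.0):
          """Radial part of the subcritical Hopf normal form:
          dr/dt = mu*r + r^3 - r^5. For mu slightly negative, the rest
          state r = 0 coexists with a stable limit cycle near r ~ 0.94;
          an unstable cycle near r ~ 0.34 acts as the separatrix."""
          r = r0
          for _ in range(int(T / dt)):
              r += dt * (mu * r + r**3 - r**5)
          return r

      print(simulate(0.2))   # small kick: decays back to rest (~0)
      print(simulate(0.5))   # big kick: settles onto the limit cycle (~0.94)
      ```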

  • @raiso9759
    @raiso9759 2 days ago

    Amazing!!!

  • @cavesalamander6308
    @cavesalamander6308 2 days ago

    (Artem, phase [feɪz] and face [feɪs] are different words with different pronunciations. The subtitle generator got confused and produced a 'face portrait'.)

  • @jaybestemployee
    @jaybestemployee 2 days ago

    Imagine AI businesses claiming human-level intelligence is achievable by a bunch of much simpler simulated neurons (the nodes in neural networks) in a much simpler structure (the AI model) than our biological brains. How real/possible do those claims seem after watching this video and realizing how complex the dynamics of even a single neuron can be?

  • @lucaferlisi2486
    @lucaferlisi2486 2 days ago

    Is synaptic plasticity involved in changing neuronal dynamics?

  • @rafa_br34
    @rafa_br34 3 days ago

    That's awesome.

  • @guidosalescalvano9862
    @guidosalescalvano9862 2 days ago

    Are neurons often on the boundaries of different "phase space types" (like you described in the video you made on Ising models)?

  • @gayan9121
    @gayan9121 3 days ago

    Hi Artem,
    Animations are sick! What tools do you use for these animations?
    Manim, Illustrator and Python?

    • @ArtemKirsanov
      @ArtemKirsanov  2 days ago +1

      Thank you!
      Yep, all three! Mostly creating individual animation video files in matplotlib, and then composing them in After Effects
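
      For anyone wanting to try the matplotlib half of that workflow, a minimal sketch (editorial, not Artem's actual code) that animates a phase-plane trajectory and saves it as a video file:

      ```python
      import numpy as np
      import matplotlib.pyplot as plt
      from matplotlib.animation import FuncAnimation

      # Precompute a FitzHugh-Nagumo trajectory to animate (parameters assumed)
      a, b, eps, I, dt = 0.7, 0.8, 0.08, 0.5, 0.05
      v, w, traj = -1.0, -0.5, []
      for _ in range(3000):
          v, w = v + dt * (v - v**3 / 3 - w + I), w + dt * eps * (v + a - b * w)
          traj.append((v, w))
      traj = np.array(traj)

      fig, ax = plt.subplots()
      ax.set_xlim(-2.5, 2.5); ax.set_ylim(-1, 2)
      ax.set_xlabel("v"); ax.set_ylabel("w")
      line, = ax.plot([], [], lw=1)
      dot, = ax.plot([], [], "o")

      def update(i):
          """Draw the trajectory up to frame i, with the current state marked."""
          line.set_data(traj[:i, 0], traj[:i, 1])
          dot.set_data([traj[i, 0]], [traj[i, 1]])
          return line, dot

      anim = FuncAnimation(fig, update, frames=len(traj), interval=20, blit=True)
      anim.save("phase_plane.mp4", fps=50)   # needs ffmpeg; use plt.show() to preview
      ```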

  • @Noneofyourbusiness2000
    @Noneofyourbusiness2000 3 days ago

    Neuronal Dynamics
    Neuronal Network
    Neural Network
    Neural Dynamics

  • @maxe624
    @maxe624 3 days ago

    Can you define the phase space using only the nullclines?
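
    Partly: the nullclines organize the phase portrait, and the equilibria are exactly their intersections, but the flow between them still needs the full vector field. A sketch of plotting them, with FitzHugh-Nagumo assumed as a stand-in model:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    a, b, I = 0.7, 0.8, 0.5
    v = np.linspace(-2.5, 2.5, 400)

    # v-nullcline: dv/dt = 0  =>  w = v - v^3/3 + I
    # w-nullcline: dw/dt = 0  =>  w = (v + a) / b
    plt.plot(v, v - v**3 / 3 + I, label="v-nullcline")
    plt.plot(v, (v + a) / b, label="w-nullcline")
    plt.xlabel("v"); plt.ylabel("w"); plt.legend()
    plt.title("Equilibria sit at the nullcline intersections")
    plt.show()
    ```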

  • @azharalibhutto1209
    @azharalibhutto1209 2 days ago

    Great ❤❤❤

  • @davidhand9721
    @davidhand9721 2 days ago

    So is there evidence that these resonators are involved with what you've previously described as phase-sensitive neurons in the hippocampus? I remember I was having a difficult time imagining a configuration of more conventional integrator neurons that could lead to such a behavior in a robust way.

  • @JorgeSantos-uw3gk
    @JorgeSantos-uw3gk 2 days ago

    Mealy machines, essentially

  • @sean_vikoren
    @sean_vikoren 2 days ago

    world class

  • @surajsamal4161
    @surajsamal4161 3 days ago

    Bro, can you make a batchnorm explanation?

  • @luisisaurio
    @luisisaurio 3 days ago

    So SupH and SNIC are mathematical models for seizures?

  • @CopperKettle
    @CopperKettle 1 hour ago

    Thanks!

  • @sum_andres31
    @sum_andres31 3 days ago

    Can I do computational neuroscience if I major in psychology? I love the content but I wish I could understand all the math :(

  • @djallalnamri1
    @djallalnamri1 2 days ago +1

    The heart also has neurons.

  • @rabia1180
    @rabia1180 1 day ago

    Can networks of neurons have these properties too?

  • @BooleanDisorder
    @BooleanDisorder 2 days ago

    You're handsome and smart. Really like the channel! Neurons are fascinating.

  • @pcbenutzer6651
    @pcbenutzer6651 2 days ago

    7:47 Does this video need a background in neuroscience?

  • @duracell211
    @duracell211 3 days ago +1

    It would be nice and informative to see how to code all this with Python, step by step...
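
    In that spirit, a minimal starting point (editorial sketch; for the actual models in the video see Artem's GitHub link above — the FitzHugh-Nagumo model and its parameters here are stand-in assumptions):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Step 1: define the model (FitzHugh-Nagumo as a stand-in)
    a, b, eps, I = 0.7, 0.8, 0.08, 0.5
    f = lambda v, w: v - v**3 / 3 - w + I     # dv/dt
    g = lambda v, w: eps * (v + a - b * w)    # dw/dt

    # Step 2: draw the vector field on a grid
    V, W = np.meshgrid(np.linspace(-2.5, 2.5, 25), np.linspace(-1, 2, 25))
    plt.quiver(V, W, f(V, W), g(V, W), angles="xy")

    # Step 3: overlay one trajectory, integrated with forward Euler
    v, w, dt, path = -1.0, 1.0, 0.01, []
    for _ in range(20000):
        v, w = v + dt * f(v, w), w + dt * g(v, w)
        path.append((v, w))
    path = np.array(path)
    plt.plot(path[:, 0], path[:, 1], "r")
    plt.xlabel("v"); plt.ylabel("w")
    plt.show()
    ```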

  • @Prod.Tellerbeats
    @Prod.Tellerbeats 2 days ago

    1,000th like 🎉😎

  • @dann_y5319
    @dann_y5319 3 days ago

    Wow.

  • @TeslaElonSpaceXFan
    @TeslaElonSpaceXFan 3 days ago +3

  • @97y59j
    @97y59j 3 hours ago

    It may be a coincidence, but those diagrams look like fingerprints to me.

  • @aksamitnaPiesc
    @aksamitnaPiesc 1 day ago

    yut