To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/ArtemKirsanov. You'll also get 20% off an annual premium subscription.
Artem, first of all, what a great video! I love the animations and how you are able to make this topic so intuitive!
I would like to chat with you about an idea I'm developing about creating a new physics-derived mathematical model of a neuron's physiology in 3 dimensions.
Your help would be greatly appreciated!
Can I contact you in any way?
I am not a neurologist, but as a physicist I really enjoyed this and your previous video. It is always a great feeling to gather new knowledge.
The timing of this one is impeccable.
The animations are so gorgeous. This is what I always dreamed of as a kid - I wanted to *see* into things and see their hidden structures and dynamics.
Thank you!!
Absolutely 😊❤
What software is being used? The designs are very beautiful!
@@ArtemKirsanov Is this video made with Manim?
@@jm3279z3 I'm not 100% sure, but I think it's made with Manim, a Python library made by YouTuber @3blue1brown for making videos visualizing math.
Some questions:
1. If the history of past inputs is crucial, how far back (in seconds) does it still matter?
Can an input from, say, 10 seconds ago still matter for the neuron's output?
2. Will anything interesting happen if the external current is periodic?
3. After a neuron's state enters a limit cycle, how can it escape? Surely that repetitive firing can't be sustained forever, especially if nutritional requirements are considered
4. What kinds of new features would arise if this "memory effect" were incorporated into artificial neural networks?
That meta-revelation when you realize that an aggregation of billions of neurons like these enters those multidimensional manifolds to understand themselves, to write that book about themselves, to create this amazing video about themselves, etc. ❤
I've always found it strange that phase space diagrams often resemble neurons themselves.
Don’t underestimate these allies within us:
m.ua-cam.com/video/SbvAaDN1bpE/v-deo.html
@@erawanpencil Well because it is all signals. Encoding information into anything that oscillates is really easy.
Marvellous, the landscape of neuronal dynamics opens up and shows off its fundamental secrets. Thank you, and thanks to Mr. Izhikevich for his studies.
15:05 I was just thinking it'd be really funny if the two points just outright exploded upon merging, but thought nah, this video seems really serious to do something silly like that. Had to laugh out loud when the explosion really came in.
Keep up the good work, really really nice visualizations!
Thank you!!
I thought the same thing when editing the video, so I added the explosion :D
Wow I'm impressed at how fast you are making videos with this level of animation and editing. Great work, hope you don't get burnt out though like I've seen some other youtubers do from the pressure of wanting to satisfy their audience. Maybe you've gotten efficient in your workflow so it doesn't take that much time. Anyway, cool video.
Thank you!!
the video is really helpful for understanding the big picture behind these diff equations
I have recently finished my MSc in Computer Science, with a focus on data science and a particular interest in deep learning. While I do find those interesting too, I have, as an extracurricular activity, spent time learning about spiking neural networks and neuromorphic computing, as I find these more biologically-plausible models much more fascinating than the "run-of-the-mill" Hebbian/rate-coding-based models.
While there are many educational videos on the former, intended to help you with visualizing their behaviors, from the likes of 3b1b and many others, there is a severe lack of videos on the latter, and all the learning I've done has had to rely only on textbooks and papers.
I'd like to thank you for spending so much time on righting that terrible wrong with videos of such high quality, and allowing me and others to gain a much better intuition into these topics.
I just read a book about system dynamics (on the theoretical math side of things). It's always stunning to me how beautiful dynamical systems described only by ODEs are, second only to the amazing results of PDEs, mainly the NSE.
This is the best applied dynamics presentation I've seen in video, text, books.. anywhere.
Kudos!
Yay! This is great!
Have you ever considered doing a video topic on Active Inference? Seems like a cool topic to add visualisation to how statistical physics combines with theoretical neuroscience.
Yes!! In fact, the idea about Active Inference started the mini-series on Hopfield nets, Boltzmann machines and Cross-Entropy, as stepping stones for that.
I’ll for sure make the active inference video at some point in the future; currently I’m still doing background literature mining to understand it myself :D
@@ArtemKirsanov Nice! I'd also be interested in active inference and related topics like Friston's free energy principle!
Wow, this was an amazing production with its own uniqueness of presentation. I wish at times you were a little bit slower when presenting key concepts or gave more variations to understand them, just to be able to savor them better.
These video drops make my week
I barely understand anything going on in this video but it's interesting as hell
Love these videos, I'm currently taking a class called Complex Adaptive Systems where we simulate these dynamical systems and visualise them in the phase plane! Super cool
Thank you so much Artem, I can't explain how much value your videos have added to my life.
I respect your clarity and professionalism in these topics.
Phenomenal series Artem! Genuinely impressive work. Please do RNNs next! 🤞
@13:03: "under the same current": Is the current visible anywhere in the state diagram? Does it correspond to a magnitude of (horizontal) perturbation? If so, that would make me understand your comment: small perturbations cause the state to return to the stable equilibrium point; larger perturbations cause it to go into cycling behavior.
Fascinating stuff. It gives a rough idea of how some kinds of learning emerge in neurons and the body, when those patterns change as you exercise and get tuned better.
As a technicality, yes, there _is_ a biophysical and biochemical difference between two neurons with different histories, so the first statement of the video is sort of wrong. Two _identical_ neurons, down to the atom, given identical inputs, _will_ give you the same outputs, because the history of the neuron is stored in its biochemical state, environment, connections, and other cellular variables. I get that you're trying to introduce a computational topic, but it's important to remember that a computational neuron is much, much simpler than a real one because real neurons have an extremely rich, high-dimensional internal state. Not trying to be negative, I'm a big fan of your work.
Didn’t he say *visually* identical, or something like that? I thought I remembered something that specifically implied that he didn’t mean atom-for-atom identical, but just like, “the same kind of neuron, without any visual differences in like, the length of the axons or dendrites etc.”
@@drdca8263 That would mostly be the biophysical state, I'd say. In contrast with the biochemical state, including environmental biochemical factors, which are not fully observable (as they exist at the limit of what is directly observable). This is also where quantum effects can start to get more involved, so things get a lot more difficult to model and observe.
I hope that advances in neuroscience will allow us to build more efficient artificial neural networks and neuromorphic computers. Thanks for sharing all this cool knowledge with the world!
There is a class of artificial neural networks that have internal memory, known as recurrent networks. There is a sizable body of theory on these structures. An important subset of this theory is that of Content Addressable Memory.
What's good about this method is that it marks off regions in the state space where the behavior is either stable or tends toward a limit cycle. One would think that these could be specified by formulas taken from the differential equations.
This work is similar to that which was done in aerospace vehicle flight mechanics during the 1980s. But there are larger questions that beg for answers: What are the feedback mechanisms in a neuron that are associated with learning? How are individual neurons assigned to a given function? And one more immediate to Artem's work: How stable (repeatable) is the operation of a neuron once it's learned how to support a given function? How does learning affect this picture? Also, how do the H-H equations tie into learning and memory?
The analysis of phase planes is fascinating. It seems to rekindle an exploratory feeling inside me, similar to data exploration via regular graphs. Wondering if this is something I can apply in my day-to-day.
As a kid in junior high (year 8), our overly qualified and very elderly math teacher explained how parameters in engineering were shown on coordinate graphs, and thus, in the old days, solutions for aircraft dynamics sat within the intersectional area of multiple parameters. I've used that thought process a lot in my life, to very good effect.
Absolutely mind-blowing! These videos are public service educational MASTERPIECES. Never stop teaching you absolute legend
It would have been nice to visualise gradients as the slope of a 3D surface. I didn't know anything about neuro dynamics. Very interesting and well explained. One can only start grasping the complex behaviour of linking different types of neurons together and how that changes their activation threshold and patterns 🤯
Very nice! I learned a lot :)
I wonder, if you took the whole 4D dynamics, if you simulated the behavior while having a variety of different external current sources, would the points in the 4D space mostly stay around a particular 2D surface in that 4D space?
Well, I suppose if the system can be well approximated with only one of those 3 channels, then the answer sort of has to be yes, but, I guess I mean…
something like “If you wanted to be a little more precise than the version with voltage and one ion channel, but still wanted to stick to a 2D space, could you do better by picking a surface which is slightly deformed compared to the one being used here?” .
Though, I guess with how the m channel fraction is set to m_\infty , that is already picking a somewhat complicated surface in the 4D space, rather than just setting 2 of the variables to constants.
I guess what I was thinking was like,
“if you set up an auto-encoder which tried to encode (V,n,m,h) as (V,x), and decode back to (V,n,m,h) with minimal error, and where V in the encoding is forced to be the same as the V that is encoded, would the best encoding for this be much different than x=n?”
This is a very cool video, thank you
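For what it's worth, that constrained auto-encoder is straightforward to prototype. The sketch below is purely hypothetical (PyTorch, with a random tensor standing in for actual simulated (V, n, m, h) Hodgkin-Huxley trajectories); after training on real simulated data, one could check how strongly the learned latent x correlates with n.

```python
# Rough sketch of the constrained auto-encoder idea from the comment above.
# NOTE: hypothetical illustration, not from the video. The "trajectories" tensor
# is random placeholder data standing in for simulated (V, n, m, h) states.
import torch
import torch.nn as nn

trajectories = torch.randn(10_000, 4)   # placeholder for (V, n, m, h) samples

class ConstrainedAE(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Encoder maps (V, n, m, h) -> a single latent x.
        self.encode_x = nn.Sequential(nn.Linear(4, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        # Decoder maps (V, x) -> (n, m, h); V is passed through unchanged by construction.
        self.decode_nmh = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 3))

    def forward(self, state):
        V = state[:, :1]
        x = self.encode_x(state)                         # learned second coordinate
        nmh_hat = self.decode_nmh(torch.cat([V, x], dim=1))
        return torch.cat([V, nmh_hat], dim=1), x

model = ConstrainedAE()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    recon, x = model(trajectories)
    loss = nn.functional.mse_loss(recon, trajectories)  # V contributes zero error
    optim.zero_grad()
    loss.backward()
    optim.step()
```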
Wonderful. Some things that weren't clear: @11:10: Saddle point trajectories: they all seem to be getting pushed AWAY (not some towards, others away). @11:30: why is there a gap in the separatrix (white line) at the bottom? @11:59: The limit cycle and separatrix both seem to be trajectories? I don't see what the difference is between them: is the separatrix a geometric trajectory? In that case, how is the limit cycle something distinct from the separatrix?
Super interesting and amazingly produced! Congrats!
We could probably use a state space with fractional derivatives, because these have the property of requiring the entire past to calculate a new state. This is unlike a classical state space, where if two state trajectories reach the same value and the input becomes zero at that moment, the next instant the response will be the same; in a fractional space, if two state trajectories reach the same value and the input becomes zero, the trajectories at the next instant will still be different, due to the history effect of the Riemann-Liouville integral. A pseudo-fractional space is actually an integer-order state space with an uncountably infinite number of states, that is, it has a distributed state variable.
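To see that memory effect concretely, here is a small numerical sketch (not from the video, just a toy illustration): a Grünwald-Letnikov approximation of the fractional derivative applied to two signals that are identical from t = 1 onward but have different pasts. The ordinary slope at the end is identical for both, while the order-0.5 derivative differs, because it weights the entire history. The order, time step, and test signals are arbitrary choices.

```python
# Sketch: a fractional derivative at time t depends on the *entire* past signal,
# unlike an ordinary derivative. Grünwald-Letnikov approximation; alpha and the
# two test signals are arbitrary illustrative choices.
import numpy as np

alpha, h, T = 0.5, 0.001, 2.0
t = np.arange(0, T + h, h)

# Two signals that are identical for t >= 1 but have different histories before that.
f1 = np.where(t < 1.0, 1.0, t - 1.0)
f2 = np.where(t < 1.0, 0.0, t - 1.0)

# Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), built by recurrence.
N = len(t)
w = np.empty(N)
w[0] = 1.0
for k in range(1, N):
    w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)

def gl_derivative_at_end(f):
    # D^alpha f evaluated at the final time point, using the whole history.
    return np.dot(w, f[::-1]) / h**alpha

print("ordinary slope at t=2  :", (f1[-1] - f1[-2]) / h, (f2[-1] - f2[-2]) / h)   # identical
print("fractional D^0.5 at t=2:", gl_derivative_at_end(f1), gl_derivative_at_end(f2))  # differ
```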
Very nicely done,it makes me want to study neuronal phasespaces for days.
You are brilliant at explaining and animating these things
I am studying a double BA in mathematics and computer science, and I studied dynamical systems of ODEs. Is that enough background for that book? Or will I need some understanding of biology (which is zero haha)?
Absolutely! The book is really self-contained, explaining all the necessary background (including electrophysiology in chapter 2).
I am doing my thesis on fluid simulation, and it's incredible how similar these topics are. It feels like I'm watching a video on fluid mechanics but with another skin lol
Another epic video - well done mate. It is absolutely worthy of your tattoo.
Reminds me of the tipping points of climate science.
And loving your neuroscience videos. They're helping me get back into the space. Students have such great resources today.
Tyoma, well done!
As someone who loves the idea of being a cyborg, the idea that we understand neurons on this level is so cool... On the other hand, it's terrifying to see how much computational power each individual neuron has when we have almost 90 billion to deal with!!!
As someone who loves being a cyborg (via basic means) we aren't even close to pushing the limits of our brains I think. We are hungry for structure, meaning, and use any of it we can to abstract, predict, and navigate reality and ourselves.
great intro to dynamical systems tbh
Wonderful. Huge thanks for this excellent explanation!
I can't help but think about heart rhythm neurons more than about brain neurons. About fibrillation, arrhythmia, cardiac arrest, defibrillation etc
Oh and not exactly about the video topic, but still neuroscience: Is human memory encoded just in the physical geometry of neurons connected with each other, or does the firing pattern of neurons also matter in determining what we remember?
In other words, does a network of neurons need to keep firing to be able to store a memory? And do the neurons need to fire in a consistent, specific way for the encoded memory to remain unchanged?
Both, IIRC. "Information" can be effectively stored in any kind of "state" with sufficiently stable configurable properties (at any level), including continuous firing patterns, various biochemical balances, and electronegativity. But the distinction might be that of short-term versus long-term memory (however, "biophysical" changes like the opening and closing of ion channels kind of bridge the gap, in stability and in being both chemical and physical).
Let’s go 🔥🙌🏻
Fantastic video!
Another Nobel grade research video ❤
All the best 💐
To address current LLMs' shortcomings, one needs a more potent mathematical modelling toolset. One avenue to pursue is to extend backpropagation beyond Euclidean space to more subtle geometries able to tackle higher NN feature spaces, so beyond complex numbers and even quaternions, leveraging Clifford algebras. At least this is my personal journey.
Thank you for this direction.
As the great Baba Brinkman would say "once you bust them up into pieces / It’s tough to go back, ‘cause... hysteresis"
Beautiful animations!
Very interesting videos. Thanks!
i just had my mind blown
This is why I went to school wanting to study ML but ended up studying math and geometry.
learning about this and hopfield networks in the context of ai at the same time is breaking my brain, which is ironic.
These videos are an autodidact's dream
Now I also understand just how different in-silico neural nets are from the OG biological neural nets - they completely miss all this time-dynamic stuff. And who knows what implications that has for their reasoning/thinking/existing-as-a-consciousness abilities...
There are people working on “spiking neural nets” which try to imitate the spiking behavior to some extent, but I don’t know how closely they match the behavior here, which I certainly didn’t know about.
(I found this video to be quite informative)
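For anyone curious, most spiking neural nets build on units far simpler than the dynamics in this video. A minimal leaky integrate-and-fire neuron looks something like the sketch below (all parameter values are illustrative defaults, not tied to any particular cell or library).

```python
# Minimal leaky integrate-and-fire neuron, the simplest spiking unit used in many
# SNNs. All parameters are illustrative, not tied to any particular biological cell.
import numpy as np

dt, T = 0.1, 200.0                                   # ms
tau, v_rest, v_reset, v_thresh, R = 10.0, -65.0, -70.0, -50.0, 10.0

t = np.arange(0, T, dt)
I = np.where((t > 50) & (t < 150), 2.0, 0.0)         # step of input current

v = np.full_like(t, v_rest)
spikes = []
for i in range(1, len(t)):
    dv = (-(v[i-1] - v_rest) + R * I[i-1]) / tau     # leaky integration
    v[i] = v[i-1] + dt * dv
    if v[i] >= v_thresh:                             # threshold crossing: spike and reset
        spikes.append(t[i])
        v[i] = v_reset

print(f"{len(spikes)} spikes during the current step")
```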
Can you please do a video on neuromorphic computing
Love your videos!
Great video! Peace out!
Such stunning visualizations of 2D to 1D projections. Humans are so blind, we have a tiny viewing port into the world, no wonder we are so ignorant.
Hopefully AI can do the more complex thinking for us.
What is your opinion on the Bienenstock-Cooper Munro theory?
Very interesting!!
Great video yet again!
There is just one thing that I didn't understand.
The effect of the 4 types made sense to me, in how they interact and respond to a current. But I'm not sure I understand how they relate to each other; is there some variable that affects which one of the 4 types a neuron will be?
Thanks!
Which particular kind is realized is determined by the biophysical parameters - how fast the channels open, what their voltage-dependence looks like, the values of the reversal potentials for different ions, etc.
How well does this model scale? Is there an upper limit to the number of neurons before accuracy loss is statistically significant? I know nothing about modelling in neurology
Do you happen to publish the animations as open-source? I would like to play with the dynamical system on my own, without having to code the entire thing.
Just uploaded the code to Github :)
github.com/ArtemKirsanov/UA-cam-Videos/tree/main/2024/Elegant%20Geometry%20of%20Neural%20Computations
Ditto here! :)
Wait wait... I don't really get how the phase space having a "basin of attraction" bounded by a separatrix ends up developing hysteresis/memory
The neuron, once pushed onto the limit cycle, stays on the limit cycle. That means its current state is determined by something that happened in the past, i.e. the initial push that sent it into the limit cycle. That is the memory of the neuron: 1 bit of memory that remembers the push.
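That bistability is easy to play with in a toy system. Below is a sketch (not the neuron model from the video) using the radial part of a subcritical-Hopf normal form, where a stable rest state at r = 0 coexists with a stable limit cycle: a small kick decays back to rest, while a large kick leaves the system circulating on the cycle indefinitely, i.e. the "one bit" stays set. The value of mu and the kick sizes are arbitrary.

```python
# Toy demonstration of the "1 bit of memory": a subcritical-Hopf normal form in
# which a stable rest state (r = 0) coexists with a stable limit cycle. This is NOT
# the model from the video, just the simplest system with the same bistability.
import numpy as np

mu = -0.1                  # below the bifurcation: rest state and limit cycle coexist
dt, steps = 0.001, 50_000

def run(r0):
    r = r0
    for _ in range(steps):
        r += dt * r * (mu + r**2 - r**4)   # radial dynamics; the angle just rotates
    return r

print("small kick (r0 = 0.2):", run(0.2))  # decays back toward r = 0 (rest)
print("large kick (r0 = 0.5):", run(0.5))  # settles on the stable cycle, r ≈ 0.94
```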
Amazing!!!
(Artem, phase [feɪz] and face [feɪs] are different words with different pronunciation. The subtitle generator got confused and produced a 'face portrait'.)
Imagine AI businesses claiming human-level intelligence is achievable by a bunch of much simpler simulated neurons (the nodes in neural networks) in a much simpler structure (the AI model) than our biological brains. How real/possible could those claims be after watching this video and realizing how complex the dynamics of even a single neuron can be?
Is synaptic plasticity involved in changing neuronal dynamics?
That's awesome.
Are neurons often on the boundaries of different "phase space types" (like you described in the video you made on Ising models)?
Hi Artem,
Animations are sick! What tools do you use for these animations?
Manim, Illustrator and Python?
Thank you!
Yep, all three! Mostly creating individual animation video files in matplotlib, and then composing them in After Effects
Neuronal Dynamics
Neuronal Network
Neural Network
Neural Dynamics
Can you define the phase space using only the nullclines?
Great ❤❤❤
So is there evidence that these resonators are involved with what you've previously described as phase-sensitive neurons in the hippocampus? I remember I was having a difficult time imagining a configuration of more conventional integrator neurons that could lead to such a behavior in a robust way.
Mealy machines, essentially
world class
bro can you make a batchnorm explanation?
So SupH and SNIC are mathematical models for seizures?
Thanks!
Can I do computational neuroscience if I major in psychology? I love the content but I wish I could understand all the math :(
The heart also has neurons.
can networks of neurons have these properties too?
You're handsome and smart. Really like the channel! Neurons are fascinating.
7:47 does this video need a background in neuroscience?
It would be nice and informative to see how to code all this with Python, step by step...
Ask o1 pro
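A more constructive starting point: the sketch below simulates a 2D spiking model and plots it in the phase plane. Note it uses the classic FitzHugh-Nagumo equations with textbook parameters rather than the exact reduced Hodgkin-Huxley model from the video, but the workflow (integrate the ODEs, then overlay the trajectory on the nullclines) is the same.

```python
# Minimal phase-plane demo: FitzHugh-Nagumo model (a classic 2D caricature of a
# spiking neuron), NOT the exact reduction shown in the video. Textbook parameters.
import numpy as np
import matplotlib.pyplot as plt

a, b, tau, I = 0.7, 0.8, 12.5, 0.5
dt, T = 0.01, 300.0
t = np.arange(0, T, dt)

v, w = np.empty_like(t), np.empty_like(t)
v[0], w[0] = -1.0, -0.5
for i in range(1, len(t)):                       # forward Euler integration
    dv = v[i-1] - v[i-1]**3 / 3 - w[i-1] + I
    dw = (v[i-1] + a - b * w[i-1]) / tau
    v[i] = v[i-1] + dt * dv
    w[i] = w[i-1] + dt * dw

# Nullclines: curves where dv/dt = 0 and dw/dt = 0
vv = np.linspace(-2.5, 2.5, 400)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(t, v)
ax1.set(xlabel="time", ylabel="v", title="Voltage-like variable")
ax2.plot(v, w, lw=0.8, label="trajectory")
ax2.plot(vv, vv - vv**3 / 3 + I, "--", label="v-nullcline")
ax2.plot(vv, (vv + a) / b, "--", label="w-nullcline")
ax2.set(xlabel="v", ylabel="w", title="Phase plane")
ax2.legend()
plt.tight_layout()
plt.show()
```

With these parameters the trajectory settles onto a limit cycle (tonic spiking); lowering I toward 0 lets you watch the cycle disappear and the rest state take over.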
1,000th like 🎉😎
Wow.
❤
It may be a coincidence, but those diagrams look like fingerprints to me.
yut