Logarithmic nature of the brain 💡

  • Published 5 Jun 2024
  • Shortform link:
    shortform.com/artem
    My name is Artem, I'm a computational neuroscience student and researcher.
    In this video we will talk about the fundamental role of the lognormal distribution in neuroscience. First, we will derive it through the Central Limit Theorem, and then explore how it supports brain operations on many scales - from cells to perception.
    REFERENCES:
    1. Buzsáki, G. & Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nat Rev Neurosci 15, 264-278 (2014).
    2. Ikegaya, Y. et al. Interpyramid Spike Transmission Stabilizes the Sparseness of Recurrent Network Activity. Cerebral Cortex 23, 293-304 (2013).
    3. Loewenstein, Y., Kuras, A. & Rumpel, S. Multiplicative Dynamics Underlie the Emergence of the Log-Normal Distribution of Spine Sizes in the Neocortex In Vivo. Journal of Neuroscience 31, 9481-9488 (2011).
    4. Morales-Gregorio, A., van Meegen, A. & van Albada, S. J. Ubiquitous lognormal distribution of neuron densities across mammalian cerebral cortex. bioRxiv, biorxiv.org/lookup/doi/10.1101... (2022) doi:10.1101/2022.03.17.480842.
    OUTLINE:
    00:00 Introduction
    01:15 What is Normal distribution
    03:03 Central Limit Theorem
    04:23 Normality in biology
    05:46 Derivation of lognormal distribution
    10:20 Division of labour in the brain
    12:20 Generalizer and specialist neurons
    13:37 How lognormality arises
    15:19 Conclusion
    16:00 Shortform: sponsor message
    16:54 Outro
    CREDITS:
    Icons by biorender.com/
    Mathematical animations were created using Manim CE python library - www.manim.community/

COMMENTS • 309

  • @ArtemKirsanov
    @ArtemKirsanov  2 years ago +25

    Join Shortform for awesome book guides and get 5 days of unlimited access! shortform.com/artem

    • @adityakulkarni4549
      @adityakulkarni4549 1 year ago +1

      @Artem Kirsanov the text at 15:03 doesn't seem to correspond to the biorxiv paper you have linked in the description 😅

  • @defenestrated23
    @defenestrated23 1 year ago +456

    Log-normal distributions are closely related to pink noise (power is 1/freq), since d(log) = 1/x. This is said to be the hallmark of self-organization. It shows up everywhere you have fractal symmetry: brains, turbulence, finance, weather, even migration patterns. (See the 1/f sketch at the end of this thread.)

    • @whannabi
      @whannabi 1 year ago +3

      The everything

    • @Maouww
      @Maouww 1 year ago +6

      yep I was totally thinking of Quantitative Linguistics the moment log-normal distribution cropped up

    • @Simonadas04
      @Simonadas04 1 year ago +4

      D(ln)=1/x

    • @luker.6967
      @luker.6967 1 year ago +4

      @@Simonadas04 some people prefer log to denote ln, since log base e is more common in pure mathematics.

    • @Simonadas04
      @Simonadas04 1 year ago

      @@luker.6967 i see
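
A minimal numpy sketch of the 1/f idea in this thread (an illustration of the commenters' point, not something from the video; the spectral-shaping recipe and all constants are my own choices): shaping white noise so its Fourier amplitude falls as 1/sqrt(f) gives power ~ 1/f, i.e. approximate pink noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16
white = rng.standard_normal(n)

# Power ~ 1/f means amplitude ~ 1/sqrt(f), so divide Fourier amplitudes by sqrt(f).
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]                      # avoid dividing by zero at the DC bin
pink = np.fft.irfft(spectrum / np.sqrt(freqs), n)

# Sanity check: the power spectral density of `pink` should decay roughly as 1/f.
psd = np.abs(np.fft.rfft(pink)) ** 2
```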

  • @aitor9185
    @aitor9185 1 year ago +66

    Great video!
    Super happy to see my paper about neuron densities made it into this video 15:12 :)

    • @isaac10231
      @isaac10231 1 year ago +3

      wow, the YouTube algorithm is crazy

  • @elismith4040
    @elismith4040 1 year ago +18

    As an electrical engineer who has also always been extremely interested in neuroscience, stumbling across this channel is pure gold.

  • @BiancaAguglia
    @BiancaAguglia 2 years ago +134

    Thank you for all the effort you put into your videos, Artem. You're doing a great job taking complex topics and making them easy to visualize and to understand.
    In case you're looking for topic suggestions for future videos, I have a few:
    1. curriculum you would follow if you had to start from scratch and wanted to teach yourself neuroscience (computational or, if you prefer, a different concentration)
    2. sources of information neuroscientists should follow in order to stay current with the research in the field (e.g. journals, labs, companies, people, etc)
    3. list of open problems in neuroscience
    Thank you again for your videos. Keep up the great work. 😊

    • @ArtemKirsanov
      @ArtemKirsanov  2 years ago +25

      Thank you for the wonderful suggestions!
      Right now, I'm actually preparing the script for a video about getting started with computational neuroscience! So stay tuned ;)

    • @BiancaAguglia
      @BiancaAguglia 2 years ago +3

      @@ArtemKirsanov Thank you. I look forward to it. 🙂

    • @leif1075
      @leif1075 2 years ago +1

      @@ArtemKirsanov Can you clarify how exactly normal distributions arise eventually, even when you have wildly extreme and different values? Is it basically just evening out?

    • @iwanttwoscoops
      @iwanttwoscoops 1 year ago +1

      @@leif1075 pretty much! look at height; there's a wide variance, and in any town you can find a tiny person and a giant. But overall, most people are average height, and these outliers are rare. Hence normal

  • @fabiopakk
    @fabiopakk 2 years ago +32

    Excellent video, Artem! I really enjoy watching your videos; they are incredibly well done and explained. I particularly liked the ones involving topology.

  • @giacomogalli2448
    @giacomogalli2448 1 year ago +3

    Your videos are fantastic for anyone interested in neuroscience!
    I never studied it in depth but it's fascinating and I'm discovering it

  • @lbsl7778
    @lbsl7778 1 year ago +1

    This channel is the most beautiful thing that has happened in my life this week, maybe even this month. Thank you for your effort, greetings from Mexico!

  • @someone5781
    @someone5781 1 year ago

    This was one of the most mindblowing videos I've seen in a while. Such amazing content Artem!

  • @emmaodom7201
    @emmaodom7201 1 year ago +4

    Wow you are such an effective communicator!!! Your insights were very clear and easy to understand

  • @omaryahia
    @omaryahia 1 year ago +2

    I am happy I didn't skip this video, and now I know another great channel for math and science
    thank you Artem
    great quality, and topics I am interested in

  • @ImBalance
    @ImBalance 1 year ago +1

    The best explanation of logarithms I've ever seen. How surprising that a neuroscience YouTube video managed to describe the concept and its application so much more completely than any of the math classes I've ever taken. Well done!

  • @nedfurlong8675
    @nedfurlong8675 1 year ago +1

    Your videos are fantastic. What an excellent communicator!

  • @threethrushes
    @threethrushes 1 year ago +3

    I studied statistics for biologists at university some 25 years ago.
    Your explanations are logical and intuitive. Good job Artem.

  • @umerghaffar4686
    @umerghaffar4686 9 months ago

    I can't believe this valuable information is available on YT for free!! I just finished my A-level studies and am keen on biology and neuroscience, so I loved getting to see a computational perspective on the brain. Makes me wonder where else log-normal distributions can be seen in the body, or what other mathematical models can be deduced in biological systems.
    Keep it up!

  • @SudhirPratapYadav
    @SudhirPratapYadav 1 year ago

    one word, EXCELLENT!!! So happy to watch this.

  • @ward_heimdal
    @ward_heimdal 7 months ago

    This is definitely one of my favourite channels now. Up there with 3B1B. You explain things really well, and the topics you cover are just my cup of tea.

  • @valor36az
    @valor36az 1 year ago +2

    Your videos are such high quality; thanks for the effort.

  • @95VideoMan
    @95VideoMan 1 year ago +4

    Thanks! This is fascinating and useful information. You presented it so clearly, and the visuals were top notch. Really appreciate this work.

  • @danin2013
    @danin2013 2 years ago +3

    i love your channel and the way you explain everything with such detail!

  • @Boringpenguin
    @Boringpenguin 2 years ago +8

    On a completely unrelated note, the lognormal distribution also pops up in the field of mathematical finance! In particular, it is used to model stock prices in the Black-Scholes model. (See the sketch at the end of this thread.)

    • @ArtemKirsanov
      @ArtemKirsanov  2 years ago +2

      Wow, cool info! Thanks for sharing

    • @BiancaAguglia
      @BiancaAguglia 2 years ago +8

      The wikipedia page on log-normal distribution has some examples too:
      - city sizes
      - number of citations of journal articles and patents
      - surgery durations
      - length of hair, nails, or teeth
      - length of chess games
      - length of comments in forums, etc.
      It's an interesting read.
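
A minimal sketch of the Black-Scholes connection mentioned above (my own illustration; mu, sigma, s0 and the horizon are arbitrary numbers): in geometric Brownian motion the log of the price is normal, so the simulated prices are lognormal by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, s0 = 0.05, 0.2, 100.0      # drift, volatility, initial price (illustrative)
t, n_paths = 1.0, 100_000             # one year, many independent paths

# Closed-form GBM at time t: S_t = S_0 * exp((mu - sigma^2 / 2) * t + sigma * W_t)
w_t = np.sqrt(t) * rng.standard_normal(n_paths)
s_t = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w_t)

# log(S_t) is normal, hence S_t is lognormal.
print(np.log(s_t).mean())             # ~ log(s0) + (mu - sigma^2 / 2) * t
print(np.log(s_t).std())              # ~ sigma * sqrt(t)
```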

  • @KaliFissure
    @KaliFissure 1 year ago

    Most stimulating content in ages! 👍🖖🤘

  • @TheBrokenFrog
    @TheBrokenFrog 1 year ago +2

    This is exactly what I was looking for today!! How strange that I found this exact topic here. Thank you :)

  • @uquantum
    @uquantum 1 month ago

    Terrific video, Artem. Mind-blowing: not only the production values, but in particular the highly engaging content. Thank you for sharing it with us. Fantastic ❤

  • @stevenschilizzi4104
    @stevenschilizzi4104 1 year ago

    Brilliant, Artem! And fascinating.

  • @horizn9982
    @horizn9982 5 months ago

    Wow man amazing videos, I wanna do research as a computational neuroscientist and your content is really what I was looking for!

  • @quentinmerritt
    @quentinmerritt 1 year ago +1

    Dude that’s so cool! I’m a first year grad student at OSU looking to research Nuclear Theory! And I’ve been watching your videos since late high school, I’d love to see a series on QFT!

  • @lucascsrs2581
    @lucascsrs2581 1 year ago

    This channel is a hidden gem. +1 subscriber

  • @khwlasemaan8135
    @khwlasemaan8135 2 years ago

    Impressive ... neuroscience is a powerful topic

  • @alkeryn1700
    @alkeryn1700 1 year ago +1

    Once wrote a spiking neural net with around a million neurons.
    Some neurons would fire almost every iteration, some every 10 iterations, and some would average once every few thousand.
    Didn't bother to plot the distribution, but that could have been fun.
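
The plot the commenter skipped might look something like this (a hypothetical sketch; the lognormal parameters and the neuron count are made up, not measurements from any network):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
rates = rng.lognormal(mean=0.0, sigma=1.5, size=1_000_000)   # per-neuron firing rates

# Logarithmic bins: a lognormal looks like a symmetric bell on a log x-axis.
bins = np.logspace(np.log10(rates.min()), np.log10(rates.max()), 60)
plt.hist(rates, bins=bins)
plt.xscale("log")
plt.xlabel("firing rate (events per iteration)")
plt.ylabel("number of neurons")
plt.show()
```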

  • @suhanovsergey
    @suhanovsergey 1 year ago

    Thanks for high quality content! I love the use of palmistry as an example of a random process at 3:42 :)

  • @luwi8125
    @luwi8125 1 year ago

    Thank you for a great video! Very interesting topic and very nice of you to show the article to make people more likely to actually look it up for themselves. 😀👍

  • @neutrino9
    @neutrino9 1 year ago

    Truly amazing topics, thank you !

  • @Luper1billion
    @Luper1billion 1 year ago +1

    It's interesting, because I thought the video would be about how the brain perceives information logarithmically, but it shows it's actually physically built logarithmically as well.

  • @Jeffben24
    @Jeffben24 1 year ago

    Thank you Artem ❤

  • @NoNTr1v1aL
    @NoNTr1v1aL 2 years ago +2

    Absolutely amazing video! Subscribed.

  • @notequartocr3502
    @notequartocr3502 1 year ago

    Your videos have good dynamics and didactics, and the editing is very harmonious. It's really surprising that you don't have 1 million subscribers. One more subscriber from Brazil 🇧🇷

  • @gz6963
    @gz6963 1 year ago

    Thanks for the clear explanation, great video

  • @matveyshishov
    @matveyshishov 2 years ago

    Beautiful, thank you!

  • @bofloa
    @bofloa 1 year ago

    This lecture is wow...thanks

  • @ruperterskin2117
    @ruperterskin2117 1 year ago

    Cool. Thanks for sharing.

  • @kapteinskruf
    @kapteinskruf 1 year ago

    Outstanding!

  • @accountname1047
    @accountname1047 1 year ago

    This video is fantastic

  • @editvega803
    @editvega803 1 year ago

    Wow! An amazing video! Thank you very much Artem. You have a new subscriber from Argentina 🇦🇷

  • @enricoginelli3405
    @enricoginelli3405 1 year ago

    Super cool video Artem! Keep it up!

  • @lakshminarayananraghavendr319

    Thanks for the informative video

  • @isaacharton7851
    @isaacharton7851 1 year ago

    Very productive vid. It inspires me to be productive as well.

  • @Tabbywabby777
    @Tabbywabby777 1 year ago +1

    Great animations and explanations. However, as a fellow scientist and learner I wish that you had presented the central limit theorem and the derivation of the log-normal distribution in its full mathematical glory. I feel that half the power of Manim is in its ability to concisely represent both the graphical and textual aspects of mathematics; to avoid one of them is to kneecap the platform. As a learner it is essential that I build associations between the graphical and textual representations. I think you did this better in your video on wavelets!
    Anyway, thank you so much for taking the time to create these videos. I am sure that they will make a lasting contribution to the field of computational neuroscience and inspire students for years to come.

  • @peterbenoit5886
    @peterbenoit5886 1 year ago

    Wonderful content on a most interesting topic.

  • @chistovmaxim
    @chistovmaxim 1 year ago

    really interesting video for someone researching NNs, thanks!

  • @justmewendy6461
    @justmewendy6461 1 year ago

    Very good. Thank you.

  • @knaz7468
    @knaz7468 1 year ago

    Really nice explanation, thanks!

  • @666shemhamforash93
    @666shemhamforash93 2 years ago +50

    Great video!
    I would love to see a follow-up video on neuronal avalanches and the critical brain hypothesis. A nice review on the topic that you might find useful is "Being critical of criticality in the brain" by Beggs and Timme (2012).

    • @ArtemKirsanov
      @ArtemKirsanov  2 years ago +6

      Thank you! I will definitely look into it!

    • @leif1075
      @leif1075 2 years ago

      @@ArtemKirsanov Thank you for sharing, Artem. I hope you can respond to my message about dealing with scientific papers and the math when you get a chance. Thanks very much.

    • @a__f
      @a__f 1 year ago +3

      Interestingly, I used to work in solar physics where avalanches are also a commonly used model for how solar flares occur

  • @mapnzap
    @mapnzap 1 year ago

    That was very well done

  • @RanLevi
    @RanLevi 1 year ago

    That was amazing! Great work, Artem - love your videos :-)

  • @sunkojusurya2864
    @sunkojusurya2864 1 year ago

    Insightful video. 👍 Keep going.

  • @bovanshi6564
    @bovanshi6564 2 years ago +1

    Great video, really interesting!

  • @jpcf
    @jpcf 1 year ago

    High quality content here!

  • @SuperEbbandflow
    @SuperEbbandflow 2 years ago +4

    Excellent video, keep the great content coming!

  • @ASMM1981EGY
    @ASMM1981EGY 1 year ago +1

    Awesome episode

  • @QasimAlKhuzaie
    @QasimAlKhuzaie 1 year ago

    A very interesting video. Thank you very much

  • @lambdo
    @lambdo 2 years ago +1

    Wonderful explanation of the Gaussian distribution

  • @aaronsmith6632
    @aaronsmith6632 1 year ago +3

    Freaking fascinating. I imagine these properties would transfer to neural network design as well!

  • @user-mc2gm6fz9i
    @user-mc2gm6fz9i 2 years ago

    great video analysis

  • @zwazwezwa
    @zwazwezwa 1 year ago

    Excellent video, much appreciated!

  • @whiteoutTM
    @whiteoutTM 1 year ago

    fascinating and engaging!

  • @mukul98s
    @mukul98s 1 year ago +18

    I studied advanced mathematics last semester but never understood the concepts of random variables and distributions with this much clarity.
    Amazing video with a great explanation.

  • @CoolDudeClem
    @CoolDudeClem 1 year ago +2

    I just want to probe the parts of my brain where the picture and sounds form so I can record my dreams and then play them back like a movie.

  • @wilsonbohman3543
    @wilsonbohman3543 1 year ago

    i have a heavy background in audio production, and i figured this made a lot of sense given the logarithmic nature of how we perceive sound; it's cool to see that this is just inherent to our brains in general

  • @TheDRAGONFLITE
    @TheDRAGONFLITE 2 years ago

    Nice video! Great pacing

  • @MattAngiono
    @MattAngiono 1 year ago +2

    New to this channel and finding this very intriguing!
    It seems to even parallel the patterns in how we actually think on the macro level.
    Are you familiar with cognitive scientist and YouTuber John Vervaeke?
    I bet you two could have a wonderful conversation that both audiences would enjoy!
    He speaks much more about the big picture of cognition, yet so much of it involves these similar patterns, with a split between extending out vs honing in.

  • @AswanthCR7
    @AswanthCR7 1 year ago +19

    Loved the video and the presentation :) Can biasing the weights of an artificial neural network toward such a log normal distribution provide any advantage?
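
One way to probe that question empirically (a speculative PyTorch sketch, not an established recipe; the 0.01 scale and the random-sign trick are my own guesses): initialize a layer with lognormal weight magnitudes and compare training behaviour against a standard init.

```python
import torch

layer = torch.nn.Linear(256, 256)
with torch.no_grad():
    # Lognormal magnitudes; lognormal samples are positive, so attach random signs.
    w = torch.empty_like(layer.weight).log_normal_(mean=0.0, std=1.0)
    signs = torch.randint(0, 2, w.shape, dtype=w.dtype) * 2 - 1
    layer.weight.copy_(0.01 * signs * w)   # rescale to keep activations in a sane range
```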

  • @PalCan
    @PalCan 1 year ago

    Awesome video

  • @buckrogers5331
    @buckrogers5331 1 year ago +5

    I've been interested in brain science since I was a kid. This is definitely understandable to a 10 year old kid. Well done! More content please!! And you should have more subscribers!!

    • @soymilkman
      @soymilkman 1 year ago +4

      damn you must be hella smart for a 10 yr old

  • @dalliravitejareddy3089
    @dalliravitejareddy3089 1 year ago

    great effort

  • @goid314
    @goid314 1 year ago

    Interesting video!

  • @geodesic616
    @geodesic616 1 year ago

    Why are guys like this so under-subscribed? Wish you success.

  • @asdf56790
    @asdf56790 1 year ago

    Amazing video!

  • @EPMTUNES
    @EPMTUNES 1 year ago

    Great video!

  • @crimfan
    @crimfan 1 year ago +1

    Lognormal is the central limit theorem for RVs that combine in a multiplicative fashion (as long as the tails aren't too heavy).
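
A few lines of numpy make this point concrete (my own sketch; the uniform factors and sample sizes are arbitrary): the log of a product is a sum of logs, so the CLT acts in log space and the product comes out approximately lognormal.

```python
import numpy as np

rng = np.random.default_rng(3)
factors = rng.uniform(0.5, 1.5, size=(100_000, 200))   # 200 multiplicative steps each
products = factors.prod(axis=1)

# log(product) = sum of log(factors), approximately normal by the CLT,
# so `products` is approximately lognormal.
logs = np.log(products)
print(logs.mean(), logs.std())
```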

  • @tringuyenuc2381
    @tringuyenuc2381 1 year ago +1

    This is a very nice connection between logarithmic perception and biological features of humans. I wonder if there is an analogous explanation of the 70-30 rule?

  • @anthonyhsiao4560
    @anthonyhsiao4560 1 year ago

    Amazing video. Thank you.
    I would guess that one, or a mix, of the following two is at the (physical) root of this:
    1) Either this is due to the "serial nature" of things, e.g. they are connected in series and hence physically embody a multiplication. One neuron firing triggers the next one, which triggers the next one, etc. Since it's a multiplication, it becomes log normal.
    2) Alternatively, it could be because of the hierarchical structure of the network (brain). You mentioned there is a spectrum of generalist (higher level) vs specialist (lower level) neurons, and since they are organized hierarchically, there is again this serialness, since a higher level neuron might be triggered by a lower level neuron.

  • @abhishek101sim
    @abhishek101sim 1 year ago

    Helpful content, with a good lowering of the entry barrier for someone uninitiated. I learned a lot. A small but important point: the sum of independent random variables is not normally distributed, but the mean of independent random variables is normally distributed. (See the note at the end of this thread.)

    • @stipendi2511
      @stipendi2511 1 year ago

      Technically you're right, since the limit of the sum of the random variables diverges. However, I don't think stressing that point helps with conceptual understanding, since in practice all sums are finite, and then the sum approximately resembles the SHAPE of a normal distribution. Once you normalize it, which is what taking the mean does, you obtain a probability distribution.

    • @Abhishek-zb3dp
      @Abhishek-zb3dp 1 year ago

      Technically it's not the mean but mean times sqrt(n) where n is the number of samples taken to get the mean and under the limit that n is large. Otherwise the mean would just be a point as n becomes very large.
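
For reference, the classical statement this thread is circling (the standard textbook CLT, not a quote from the video): neither the raw sum (its variance grows) nor the raw mean (its variance shrinks to zero) has a nondegenerate normal limit; the centered, rescaled mean does.

```latex
% For i.i.d. X_i with mean \mu and finite variance \sigma^2:
\[
  \sqrt{n}\,\bigl(\bar{X}_n - \mu\bigr) \;\xrightarrow{d}\; \mathcal{N}\bigl(0, \sigma^2\bigr),
  \qquad
  \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i .
\]
```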

  • @reocam8918
    @reocam8918 1 year ago +1

    Awesome! Can I ask how you create these fantastic animations? Thanks!

  • @Treviisolion
    @Treviisolion 11 months ago

    The shape certainly makes some intuitive sense. Extremely low firing rates are more likely to be mistaken for random noise, so a neuron wants to be above that limit. However, it doesn't want to be too far above it, because firing is energy-intensive and the brain is already a calorie-hungry organ. At the same time, if information is encoded partially in the firing rate, then utilizing only a small subsection of possible firing rates is not information efficient, so neurons that need to be heard more often would be incentivized to use less-utilized firing rates, as there is less noise in those channels. I don't know whether that explanation would necessarily result in a log-normal distribution as opposed to a low-median normal distribution, but it is interesting to see roughly the shape I was thinking of emerge at the end.

  • @chipsi21
    @chipsi21 2 years ago

    Wow so awesome, thanks a lot 🤙🏻🤙🏻

  • @TheKemalozgur
    @TheKemalozgur 1 year ago +1

    Whenever there is log-normal behaviour, we can think of the connected and combined behaviour of things, namely an evolutionary step. The order of importance of things can only stabilize in a logarithmic fashion.

  • @cynido
    @cynido 1 year ago +1

    Brain is the most complex and fundamental part of our body - Brain

  • @maxmyzer9172
    @maxmyzer9172 1 year ago

    3:21 this video so far is more helpful than the statistics course i took

  • @imsleepy620
    @imsleepy620 2 years ago +3

    So glad I'm subscribed. Great video! Would love to see a video on neural field models in the future.

  • @martinr7728
    @martinr7728 1 year ago

    I'm quite impressed how you present all the information, very concise and clear

  • @carlotonydaristotile7420
    @carlotonydaristotile7420 2 years ago

    Cool video.

  • @blazearmoru
    @blazearmoru 1 year ago

    I first discovered this when my psych professor explained that we experience loudness not additively but logarithmically.

    • @U20E0
      @U20E0 1 year ago +1

      I may of course be wrong, but I don't think that is related.
      Rather, the receptor cells themselves lose sensitivity with higher input. I don't know anything about how those cells work, but it may possibly be due to a limited store of chemicals, like it is with visual receptors.

  • @anywallsocket
    @anywallsocket 1 year ago

    What it means is that when we measure things to be lognormal, we are assuming an additive linearity, when there likely exists a more natural measure of the thing in a multiplicative non-linearity, e.g. one that doesn't ignore the fact that the thing is self-interacting, or grows from itself.

  • @robinguillard7042
    @robinguillard7042 1 year ago +3

    Very interesting, thank you for the video. Now I would like to know: could you explain why, when we measure EEG, we measure signals with power up to 50 Hz, notably with an alpha peak at 10 Hz, knowing that, as you said, the vast majority of neurons fire at around 1 Hz?
    I would suppose it's constructed by the summation of neurons with different phases, but it would be interesting to dig a little deeper!

    • @U20E0
      @U20E0 1 year ago

      Maybe they are more interested in the minority

  • @YonatanLoewenstein
    @YonatanLoewenstein 1 year ago

    Very nice!
    An explanation of why the distribution of firing rates in the cortex is log-normal can be found in Roxin, Alex, et al. "On the distribution of firing rates in networks of cortical neurons." Journal of Neuroscience 31.45 (2011): 16217-16226.

  • @sytelus
    @sytelus 1 year ago

    Thanks!

  • @inversebrah
    @inversebrah 1 year ago

    learned a lot, ty

  • @lucusekali5767
    @lucusekali5767 1 year ago

    You deserve a subscribe

  • @jeromewelch7409
    @jeromewelch7409 1 year ago

    Nice!