A Universal Theory of Brain Function

  • Published Feb 1, 2025

COMMENTS • 388

  • @TRVMASMAS
    @TRVMASMAS 11 days ago +222

    The cool part of the simulator in our heads is what happens when we go to sleep: we turn off the gyroscope, close the eyes, turn off the lights, keep warm, and long-term memory recall, prediction, planning, and the discriminator all switch off as GABA starts to stop communication between brain regions. Then the info from the senses is just noise for the generator, which makes dreams out of the things we experienced during the day, and we react to them as if they were real. The best part is when GABA lowers as cortisol rises and you realize you are in a dream. This is not nature's bug but an important feature: it reminds us that the waking world is real and that we are conscious, because we have successfully classified the dream experience as a forgery. This knowledge about what is not reality is what allows us to acknowledge consciousness by contrasting it with the fake experiences of dreams. A feature that could be achieved in artificial systems too.

    • @mhuruuk
      @mhuruuk 11 days ago +2

      Good point

    • @jeroenhekking8398
      @jeroenhekking8398 11 days ago +23

      "right, fellow humans?"

    • @protestthebread1046
      @protestthebread1046 11 days ago +3

      We made them dream, and they woke to see us for the first time

    • @deltamico
      @deltamico 10 days ago +4

      I always explained consciousness to myself as a sensory input of brain activity itself, but this is nice.

    • @anywallsocket
      @anywallsocket 10 days ago +1

      to be fair i don't think the lucid experience is at all necessary for understanding reality by contrast with the imaginations of sleep -- you can just compare it with all the non-lucid dreams you remember when you wake up.
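The gating story in the top comment can be caricatured in a few lines: a fixed "generator" maps latent causes to predicted sensations, and a gate swaps external input for internal noise. Everything here (the linear map, the gain parameter) is invented for illustration, not a claim about actual circuitry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": a fixed random linear map from latent causes to predicted
# sensations, standing in for the brain's generative model (purely invented).
W = rng.normal(size=(16, 4))

def generate(latent):
    return W @ latent

def perceive(sensory_gain, external_input):
    # Waking (gain ~1): the generator is driven by gated external input.
    # Dreaming (gain ~0): the gate closes and internal noise drives it instead,
    # so the same machinery produces experience with no outside cause.
    drive = sensory_gain * external_input + (1 - sensory_gain) * rng.normal(size=4)
    return generate(drive)

awake = perceive(sensory_gain=1.0, external_input=np.ones(4))
dream = perceive(sensory_gain=0.0, external_input=np.ones(4))  # input is ignored
```

The point of the sketch is only that "dreaming" needs no new machinery: the waking pipeline run on noise already produces structured output.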

  • @laviniamitiko7222
    @laviniamitiko7222 8 days ago +14

    I’m a computational neuroscientist and have been following your channel since your video on neuronal manifolds three years ago. I’m always impressed by how you cover such a wide range of topics in computational neuroscience, from biophysical modeling (like Hodgkin-Huxley) to concepts like the Hopfield model. Congratulations on your amazing work! :)

  • @DR-54
    @DR-54 10 days ago +68

    as someone experienced with psychosis, I can attest to the validity of reality itself being a hallucination.
    If I hallucinate something in the corner of my eye and then look directly at it or use my phone camera, it'll immediately disappear most of the time. This is much more difficult to model if you see sensory first, but if you see prediction first, it makes perfect sense.
    I've supported this "backwards" sensory model since the first moment I learned of it.
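That "look directly at it and it disappears" observation fits the standard precision-weighting story: a percept behaves like a precision-weighted average of prediction and evidence, and foveating (or pointing a camera) effectively cranks up sensory precision. A minimal Gaussian sketch, with all numbers invented:

```python
def percept(prior_mean, prior_precision, obs, obs_precision):
    # Bayes-optimal fusion for Gaussian beliefs: the percept is a
    # precision-weighted average of prediction and sensory evidence.
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

# "Something in the corner of my eye" (all numbers invented): the model
# predicts a shape (1.0) while the retina reports nothing there (0.0).
glimpse = percept(1.0, prior_precision=4.0, obs=0.0, obs_precision=1.0)
direct  = percept(1.0, prior_precision=4.0, obs=0.0, obs_precision=40.0)
# Peripheral vision is low-precision, so the prediction wins out in `glimpse`;
# looking directly raises sensory precision and the phantom fades in `direct`.
```

On a prediction-first reading, the hallucination is just this fusion running with too little (or too imprecise) evidence to correct it.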

    • @JohnSmith-op7ls
      @JohnSmith-op7ls 9 days ago +1

      Reality isn’t a hallucination. By definition.

    • @DR-54
      @DR-54 9 days ago +8

      @@JohnSmith-op7ls What defines reality is actually just some component of your brain. That part of the brain serves as a representative of all things real. Thereby, all things marked within your experience as "reality" can simultaneously be entrapped within only your existence, hallucinated. If it can be hallucinated, it is ALREADY hallucinated.
      Every sense you have can suffer hallucinations, but that means that every sense you have is already hallucinated, and that all parts of your experience are a strictly-managed hallucination--that's still a hallucination.
      This model easily handles your argument. It's just a really powerful model.
      Doesn't this remind you of how psychosis can present in some people with true hallucinations, but in other people as only pseudo-hallucinations? The only difference between the two is how your brain perceives its "reality"ness, with pseudo-hallucinations being defined by how the person having them does not perceive them to be real.

    • @JohnSmith-op7ls
      @JohnSmith-op7ls 9 days ago

      @ Reality is not a hallucination. Reality is reality; its state is what it is regardless of how you perceive it or whether you’re even there to perceive it.
      What you’re talking about is how you preconceive that reality, what your subjective model of reality is. If you’re blind and don’t see a wall, it doesn’t mean you can walk through it.
      Hallucinations are perceiving something that isn’t actually there. Perceiving it doesn’t make it real; it simply means you think it’s real to some extent. Seeing a fire that doesn’t exist won’t burn your hand if you touch it.

    • @mou8762
      @mou8762 9 days ago +9

      @@DR-54 Your argument is just a hyperbolic use of “hallucination”. This reads more as dogma than an intellectual argument.

    • @C90-k2f
      @C90-k2f 8 days ago

      @@JohnSmith-op7ls According to advances in neuroscience, it is a controlled hallucination, as per neuroscientist Anil Seth.

  • @keylime6
    @keylime6 11 days ago +374

    I’m on the ML to neuroscience pipeline now 😂

    • @ibthesam
      @ibthesam 11 days ago +10

      I taught this to an LLM through interactive conversation 😂

    • @4thpdespanolo
      @4thpdespanolo 10 days ago +5

      Take the good ideas and come back to ML

    • @bes1desme
      @bes1desme 10 days ago +6

      same shit lmao

    • @alexanderhemming6148
      @alexanderhemming6148 10 days ago +1

      this video is sort of about the books "Being You" by Anil Seth and "The Experience Machine" by his colleague and fellow Sussex professor Andy Clark

    • @atommax_1676
      @atommax_1676 9 days ago +1

      Same here. But I think it's better to get new ideas and knowledge in neuroscience and come back to ml. Anyway, good luck

  • @disgruntledwookie369
    @disgruntledwookie369 10 days ago +38

    I think one of the most interesting aspects of this is that it demonstrates the ability for the brain to disagree with itself. In the sense that it is possible for us to know at some higher cognitive level that the face is concave whilst still being unable to "see" it that way.

    • @SamogitianJesus
      @SamogitianJesus 9 days ago +6

      It's our conscious ability to reason that is able to disagree with sensory input (valid or not) and formulate reactions to it accordingly.

    • @xavierkibet4170
      @xavierkibet4170 8 days ago +1

      ​@@SamogitianJesus I am really interested in how this "reasoning" works as I guess it would entail the brain modelling and understanding itself. I'm not sure how it works. I'm just curious.

    • @SamogitianJesus
      @SamogitianJesus 8 days ago +1

      @@xavierkibet4170 it's so complex, it is very difficult to properly model, especially as our understanding of consciousness is still in early stages.

    • @el_saltamontes
      @el_saltamontes 5 days ago

      Probably what's going on when we feel confused
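The "knowing it's concave but still seeing it convex" split in this thread can be phrased in Bayesian terms: perception reports the hypothesis with the higher posterior, and a lifetime prior on convex faces can outweigh moderately contrary evidence. A toy log-posterior comparison (all probabilities made up):

```python
import math

def log_posterior(log_prior, log_likelihood):
    # Bayes' rule in log form (up to a shared normalizing constant).
    return log_prior + log_likelihood

# Invented numbers: a lifetime prior says faces are almost always convex,
# while the shading evidence from a hollow mask mildly favors "concave".
convex  = log_posterior(math.log(0.999), math.log(0.3))
concave = log_posterior(math.log(0.001), math.log(0.7))
seen = "convex" if convex > concave else "concave"
# Perception follows the larger posterior, so we still *see* a convex face,
# even while higher-level knowledge maintains that the mask is hollow.
```

The "disagreement" is then just two systems consulting different quantities: the percept tracks the posterior, while explicit reasoning can track the evidence.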

  • @mohammedmokdadrocks
    @mohammedmokdadrocks 11 days ago +84

    If you could really describe the math behind Friston's work in a digestible format, that would be an enormous feat

  • @TheForbiddenLOL
    @TheForbiddenLOL 11 days ago +63

    This is strangely intuitive - I feel like many inquisitive people who have taken psychedelics have likely gained an understanding of the internal world model and the way perception can be manipulated by causes - and it aligns so well with active machine learning research. Fascinating stuff, Artem. Your videos really tempt me to go back to school for computational neuroscience.

    • @TheARN44
      @TheARN44 10 days ago +7

      I was thinking the same thing. It feels weird to watch a video about pattern recognition in neuroscience and recognize the patterns in your own experience of thinking.

    • @ArtemKirsanov
      @ArtemKirsanov 10 days ago +4

      Thank you!

  • @Rihzi
    @Rihzi 11 days ago +44

    Your channel is a goldmine

  • @shubhamrasal8930
    @shubhamrasal8930 10 days ago +13

    Great video. Your explanations are so succinct that I get it on the first try.
    I don't think there's anyone in the world who explains the brain as well as you.
    Thank you for putting in the effort.

    • @ArtemKirsanov
      @ArtemKirsanov 10 days ago

      Thank you! I’m glad you found it helpful!!

  • @samuelbucher5189
    @samuelbucher5189 10 days ago +27

    14:55 I've actually experienced a relatively long version of this process. It's that feeling when you are looking at something but are not sure what it is for almost a second, while your brain comes up with numerous barely tangible suggestions.

    • @scubajotaro
      @scubajotaro 10 days ago +4

      happens a lot when i'm super sleep deprived

    • @jay_13875
      @jay_13875 10 days ago +11

      The "Name one thing in this photo" meme comes to mind

    • @kn49
      @kn49 8 days ago

      Definitely happens when I'm sleepy, just waking up and staring at something I don't quite recognize yet; it's a moon, it's a plane, oh no it's just my ceiling fan.

  • @user-yh6tf6ne4s
    @user-yh6tf6ne4s 9 days ago +3

    This is the best video I have seen explaining the free energy principle in simple terms as related to cognition. Thank you so much! I will be eagerly waiting for the one explaining its math. Great job!

  • @5P4C3V01D
    @5P4C3V01D 2 days ago

    I can't express how cool I find this explanation. I am rarely so excited to watch a video. The way you combine computational analogies with cerebral explanations is just perfect. It reminds me of Carl Jung's explanations of cognitive functions (intuitive/sensory), just more structured and contextual.

  • @blackveganarchist
    @blackveganarchist 2 days ago

    Another superb video. You, sir, are a fantastic researcher and presenter. Despite all its valleys, this right here is use of the Internet at its absolute peak. You inspire me to self-study computational neuroscience not for any inherent utility, but to scratch the itch of human curiosity. Thank you for what you do!

  • @__cooper__
    @__cooper__ 2 days ago +3

    After using (custom) sensory expansion technologies for a while, I'll get predictive echoes of sensory signals that my mind patches in briefly when *not* using the tech, when other senses pick up that it should have signaled for it over the expansion tech - a perhaps similar mechanism mentally at play underneath as with hallucinations in terms of what it uses to present predictions to the mind, not the origin of cause (as actual hallucinations within our generated world models would have different triggers and sustains I imagine, not having them myself to explore unless sleep deprived)
    That plus the general concepts of sensory substitution/expansion/addition (or sensory weaving as I've been calling it) point towards a generative model pulled from predictions, abstracts in memory, because of how these senses have to integrate into an experience of qualia over time.
    It isn't just like learning a skill, it is experientially predictive! (N=1, subjective experience so truckload of salt with that)
    There's a *lot* of extra signals to thread in (seemingly infinite sources, real or constructed), but the things we can observe with this process by adding in new senses can reveal a lot more, I think, in terms of how the mind constructs not just sensory experiences but general cognitive ones as well. For example, a blind person cannot visualize or cognize in the same manner as sighted individuals - so also for tech that can alter senses, which modifies our models of the world, which modifies the predictive patterns, the pathways it can take to different destinations mentally, in idea spaces traveled to by altering our predictions and perspectives of reality and internal models.
    Going back to the first thing of false predictions - I wonder if this has similar ties with things like people with synesthesia of types where they hear sounds for visuals, for example. (Ball bouncing, drops falling or springs making sounds and so on)
    Associations from other modalities bringing up models in the mind and playing them before getting sensory confirmation from external.
    Minds are so weird, and so is reality so time to get rambunctious and explore these perspectives we have into reality from them over the coming years hopefully :)
    Neosensory has what, 70 things in the works? Gonna be a wild world we inhabit someday cognitively and perceptually when this sort of perceptual manifold modification tech is mainstreamed.

  • @1three7
    @1three7 9 days ago +6

    Imagine encountering someone late at night on a dark sidewalk. Their face seems off, but you can't put your finger on why. Then, as they pass, you see from the side that their face is concave.

  • @falsegod3164
    @falsegod3164 11 days ago +5

    Idk why this channel doesn't have a billion subscribers lol. The quality and topics are top of the line. Gonna go through all your vids now, it's a gold mine ✨️

  • @GeoffryGifari
    @GeoffryGifari 2 days ago +1

    I am once again amazed by the top notch visuals

  • @mrazo
    @mrazo 11 days ago +43

    This is an incredibly well-produced video. I have been obsessed with the ideas behind the Free Energy Principle for several years now, and this is by far the most straightforward explanation I have seen. Keep up the fantastic work!
    I would like to add that I believe Friston's ideas are much more general than a theory about how the brain works. The Free Energy Principle might be at the core of the existence of complex adaptive systems--what David Krakauer calls problem-solving matter. The late (and sadly mostly unknown) John O. Campbell put it best in his book "The Thinking Universe."

    • @ArtemKirsanov
      @ArtemKirsanov 11 days ago +4

      Thank you!!
      I agree, FEP is much more broad - about living systems in general

    • @hyperduality2838
      @hyperduality2838 11 days ago

      Cause is dual to effect -- causality.
      Information is dual.
      Syntropy (knowledge, prediction) is dual to increasing entropy (lack of knowledge) -- the 4th law of thermodynamics!
      Concept are dual to percepts -- the mind duality of Immanuel Kant.
      "Always two there are" -- Yoda.

    • @hyperduality2838
      @hyperduality2838 11 days ago

      @@ArtemKirsanov Complexity is dual to simplicity -- free energy is dual.
      Perceptions (effects) are becoming concepts (causes) -- retro-causality or syntropy!

    • @DistortedV12
      @DistortedV12 10 days ago

      @@mrazo Yep, that is why Karl Friston distinguishes the free energy principle from “active inference” ;)

  • @kn49
    @kn49 8 days ago +4

    What this video has got me thinking about is how this recognition -> 'latent distribution space' (an embedding space...?) -> generative/predictive pipeline might work in machine learning for, well, actually learning things. Right now our models (as far as I'm aware) require extensive training which is basically 'baked' into the model itself, and that's that. You have a context window, but you can't adjust any of its weights or activations during inference; the AI won't remember anything you say or do the moment you open a new, out of context, window.
    This is clearly not how humans or other animals work - we are actively taking information in, and re-weighting our internal model which allows us to learn and then do new things (like learning language, math, how to do carpentry, etc.) - bridging this gap seems to be something still being tackled in machine learning.

    • @Hollowed2wiz
      @Hollowed2wiz 2 days ago

      This is the next big step in machine learning. If we can find a way to use the downtime during which models are not working on a task to update the parameters that hold the model's world model, using the context the model picked up during its previous tasks, THEN I think we will have our first foot in the door that is AGI.
      Research needs to focus on this and put model scale aside as a path for improvement. We need new architectures that leverage the free energy principle.
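The frozen-weights vs. continual-learning gap this thread describes can be seen in a toy least-mean-squares learner: with updates disabled the prediction error never improves, while a delta-rule step after every observation makes it collapse. This is only a sketch of the distinction, not a proposal for how to do it in large models.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])   # the "world" the learner should model

def run(update):
    w = np.zeros(2)
    sq_errors = []
    for _ in range(200):
        x = rng.normal(size=2)
        err = true_w @ x - w @ x           # prediction error on this observation
        sq_errors.append(err ** 2)
        if update:
            w += 0.1 * err * x             # delta-rule / LMS step after every sample
    return float(np.mean(sq_errors[-20:]))  # error near the end of the run

frozen_err = run(update=False)  # weights never move: error never improves
online_err = run(update=True)   # continual updates: error keeps shrinking
```

A context window resembles the `update=False` case with extra scratch memory: the inputs flow through, but the model itself stays put.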

  • @Amfivolia
    @Amfivolia 11 days ago +2

    Yesss excited to watch this :) your videos inspire mine a lot - keep up the amazing work!

  • @DistortedV12
    @DistortedV12 11 days ago +39

    FINALLY, THE VIDEO IS HERE. Karl Friston is basically saying we are doing what LLMs do when they train on data, but updating all the time, to make our sensory world (not token world) reach some equilibrium (an easily predictable state) by acting or by predicting accurately ahead of time (akin to reducing loss on next-word prediction). Sometimes we even predict future sensory surprise, like knowing that being in a dark room too long will leave us hungry and out of whack/equilibrium.

    • @hyperduality2838
      @hyperduality2838 11 days ago

      Making predictions is a syntropic process -- teleological.
      Cause is dual to effect -- causality.
      Information is dual.
      Syntropy (knowledge, prediction) is dual to increasing entropy (lack of knowledge) -- the 4th law of thermodynamics!
      Concept are dual to percepts -- the mind duality of Immanuel Kant.
      "Always two there are" -- Yoda.

    • @derickd6150
      @derickd6150 10 days ago +4

      @@hyperduality2838You leave this comment on every video and I guarantee you it is just word salad

    • @hyperduality2838
      @hyperduality2838 10 days ago

      @@derickd6150 Antinomy (duality) is two truths that contradict each other -- Immanuel Kant.
      Truth is dual to falsity -- propositional logic or Boolean algebra.
      "This sentence is false" -- the sentence or words.
      If the sentence is true then the sentence is false.
      If the sentence is false then the sentence is true -- antinomy.
      The sentence is true and false both at the same time -- duality.
      Syntax is dual to semantics -- languages, communication or information.
      Words, sentences, languages are all dual.
      If you want a word salad then you are already using duality but you seem unaware of this fact!
      Enantiodromia is the unconscious opposite or opposame (duality) -- Carl Jung.
      Sense is dual to nonsense (word salad).
      You are using duality to claim that duality does not exist hence your perspective is contradictory or dual.
      All languages (words) are dual.

    • @elibarbq
      @elibarbq 10 days ago

      @@hyperduality2838 Not sure if Haiku or schizo

    • @hyperduality2838
      @hyperduality2838 10 days ago +1

      @@elibarbq Haiku.
      Your mind is dual, perceptions or measurements are becoming conceptions, ideas or causes.
      Mind (syntropy, synergy) is dual to matter (entropy, energy) -- Descartes or Plato's divided line.
      All languages are dual -- LLMs.
      Syntax is dual to semantics -- languages, communication or information.
      Lie groups (exponentials) are dual to Lie algebras (logarithms) -- information is dual.

  • @HudsonDKent
    @HudsonDKent 3 days ago +1

    Reminds me of cognitive dissonance in psychology. When people suffering from depression or anxiety are freed from dissonance between reality and their poor model of reality they often describe it as being full of energy again with a new lease on life. They have reached a new more accurate equilibrium with the world.

  • @HominidPetro
    @HominidPetro 11 days ago +11

    This is the basis of DishBrain - "In vitro neurons learn and exhibit sentience when embodied in a simulated game-world", which was published 3 years ago.

    • @MantrTheSpiceGuy
      @MantrTheSpiceGuy 10 days ago +5

      Damn, we got Black Mirror-level horrors beyond our comprehension before GTA6

    • @IvanSpaceBiker
      @IvanSpaceBiker 8 days ago +1

      Half-Life 3 confirmed

  • @quiksilver10152
    @quiksilver10152 10 days ago +1

    Just in time as I begin my PhD in neuroengineering. You are my hero!

  • @lucadoge
    @lucadoge 9 days ago +1

    MIND BLOWN BY THE INTRO!!

  • @testboga5991
    @testboga5991 11 days ago +19

    Calling it free energy principle is interesting, as apparently the creator didn't predict which association this would cause in other scientists. Why not call it prediction error principle? It's not energy, energy is a thing that can perform work. Prediction errors don't perform work.

    • @DistortedV12
      @DistortedV12 11 days ago +6

      yeah i hate that term.. probably wanted it to be connected to physics in some way

    • @SingularityEternal
      @SingularityEternal 11 days ago +6

      @@DistortedV12 It _is_ connected to physics.

    • @hyperduality2838
      @hyperduality2838 11 days ago +1

      Making predictions is a syntropic process -- teleological.
      Cause is dual to effect -- causality.
      Information is dual.
      Syntropy (knowledge, prediction) is dual to increasing entropy (lack of knowledge) -- the 4th law of thermodynamics!
      Concept are dual to percepts -- the mind duality of Immanuel Kant.
      "Always two there are" -- Yoda.

    • @epajarjestys9981
      @epajarjestys9981 10 days ago +1

      @@hyperduality2838 ur mom

    • @hyperduality2838
      @hyperduality2838 10 days ago

      @@epajarjestys9981 There is also a 5th law of thermodynamics as energy is dual!
      And everything in physics is made out of energy.
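On the naming complaint above: variational free energy is not just the prediction error. For a toy Gaussian model (prior z ~ N(0,1), likelihood x|z ~ N(z,1), approximate belief q = N(m, s2)) it splits into a complexity term (a KL divergence from the prior) plus an inaccuracy term, and it upper-bounds surprise (-log p(x)), which is where the thermodynamic analogy comes from. A numeric check of both facts:

```python
import math

def free_energy(m, s2, x):
    # Toy Gaussian model: prior z ~ N(0, 1), likelihood x | z ~ N(z, 1),
    # approximate belief q = N(m, s2).
    # Complexity: KL(q || prior), the cost of moving beliefs away from the prior.
    complexity = 0.5 * (s2 + m**2 - 1 - math.log(s2))
    # Inaccuracy: expected squared prediction error under q (negative accuracy).
    inaccuracy = 0.5 * math.log(2 * math.pi) + 0.5 * ((x - m)**2 + s2)
    return complexity + inaccuracy

x = 2.0
surprise = 0.5 * math.log(2 * math.pi * 2) + x**2 / 4  # exact -log p(x); p(x) = N(0, 2)
F_best = free_energy(m=1.0, s2=0.5, x=x)  # belief equals the true posterior: bound is tight
F_lazy = free_energy(m=0.0, s2=1.0, x=x)  # belief stays at the prior: bound is loose
```

So "prediction error principle" would name only the inaccuracy term; the free energy also charges for overly complex explanations, which is part of why the borrowed name stuck.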

  • @raajchatterjee3901
    @raajchatterjee3901 8 days ago +1

    Nice work! Excited for you to cover some of the FEP math foundations in future videos.

  • @starlight36887
    @starlight36887 6 days ago

    The parallels between the brain and machine learning algorithms are obvious; what really surprised me was how much we apply this same thought process to solve problems in geophysics... Mind blowing!

  • @sevdattufanogullari6581
    @sevdattufanogullari6581 10 days ago +5

    Controlled hallucination is a really good way of explaining what humans experience

    • @starrmont4981
      @starrmont4981 10 days ago +1

      "Life is a dream within a dream"

    • @U-inverse369
      @U-inverse369 10 days ago

      That puts me in the mindset: let go and let life guide you. This is a whole other level of being, instead of the ego-mind. Intuition is my guidance.

  • @skeltaldelegate5408
    @skeltaldelegate5408 4 days ago

    Neat trick. Have someone stand behind you and throw dodgeballs at you. You can't turn around, but you can step side to side to avoid the ball. After enough trials, without any extraneous sensory input like noise or reflections, you can pretty accurately predict when a ball is being thrown and how fast it might be moving. I don't know what the term for this would be, but it must have something to do with behavioral predictions, patterns, and constant intervals. It's something I've noticed in the PvP scene of a game called Armored Core: For Answer, where highly skilled players can predict where an out-of-sight opponent will be moving and shooting from without the use of a radar or sound cue.
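The dodgeball intuition is basically interval estimation: after enough trials you carry a distribution over the thrower's rhythm and anticipate inside it. A trivial sketch with made-up timings:

```python
import statistics

# Invented timings: seconds between successive throws, as the observer counts them.
intervals = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9]
mean = statistics.fmean(intervals)
spread = statistics.stdev(intervals)
# Anticipation window: when the next throw is "due", step aside.
window = (mean - 2 * spread, mean + 2 * spread)
```

The tighter the spread gets with practice, the earlier and more confidently the dodge can be launched, without any direct view of the thrower.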

  • @idopaz1659
    @idopaz1659 8 days ago +1

    Thank you very much for all the amazing content.
    I think that without explaining Markov chains and the boundary conditions of 'external' vs 'sensory' vs 'internal' states, it's not very clear why inner models would develop.
    And without the connection to the Legendre transform, the term 'free energy' seems unrelated.
    That being said, this was a great introduction to inference, which is obviously related.

  • @jonahdunkelwilker2184
    @jonahdunkelwilker2184 6 days ago

    Ayyy such a great video, I’m so excited for the upcoming videos diving into the math and relating it to ML algorithms:)) Thank you for the wonderful content!!

  • @johann-lh392
    @johann-lh392 11 days ago

    OMG, I was thinking of revisiting the Free Energy Principle recently and then here you uploaded it 🤩! Thank you so much! I have always been intrigued by this model (and your explanation)! 😄

  • @cabecitadezapallo
    @cabecitadezapallo 5 days ago

    Incredibly beautiful work artem

  • @mediali1
    @mediali1 7 days ago

    Thank you!

  • @carpeet3743
    @carpeet3743 7 days ago +5

    0:17 bombastic side eye

  • @isaacgroen3692
    @isaacgroen3692 10 days ago

    AMAZING! This explains the moses illusion in a very direct and technical manner.

  • @hanspeter919
    @hanspeter919 4 days ago +1

    14:55 What's the different mechanism behind R ("proposes different explanations") and G ("checks how well they work")?
    -->
    How is G's checking of how well an explanation works different from the mechanism that leads to R's initial proposal? How can they be divided?
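One common answer to this question: R and G can be divided because they do different jobs. G defines what each hypothesis *predicts*; R is just a fast proposal mechanism, and "checking" is nothing more than measuring the mismatch between G's prediction and the input. A toy analysis-by-synthesis loop (categories and patterns invented):

```python
import numpy as np

# Toy analysis-by-synthesis; the categories and sensory patterns are invented.
PATTERNS = {"cat": np.array([1.0, 0.0, 1.0]),
            "dog": np.array([0.0, 1.0, 1.0]),
            "car": np.array([1.0, 1.0, 0.0])}

def G(cause):
    # Generator: the sensation this cause *would* produce.
    return PATTERNS[cause]

def R(observation):
    # Recognizer: a cheap shortlist of candidate causes (here: everything).
    return list(PATTERNS)

def perceive(observation):
    # R proposes, G checks: "checking" is just scoring each generated
    # prediction by its squared error against the observation.
    return min(R(observation), key=lambda c: float(np.sum((G(c) - observation) ** 2)))

noisy_cat = np.array([0.9, 0.1, 0.8])  # a corrupted "cat" pattern
best = perceive(noisy_cat)
```

So the division of labor is proposal vs. objective: R can be sloppy as long as G's prediction error arbitrates the final percept.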

  • @GeoffryGifari
    @GeoffryGifari 2 days ago +2

    How does "paying attention" enter into this free energy theory of brain function?

  • @asokt4931
    @asokt4931 1 day ago

    My model is close, but differs in how we model the complexity of the algorithm (the brain) vs. reality - and the energy is kind of the delta.
    But it proposes something cleaner.
    Very informative - thank you

  • @elyorrasulov6976
    @elyorrasulov6976 3 days ago +1

    Now, after 22 years, we've got a textbook definition of how genjutsu worked in Naruto

  • @Murilo-r4w
    @Murilo-r4w 10 days ago

    Your work is incredible! Thank you for the care you put into the audio, it helped a lot!

  • @GuilhermeCosta-nv7zm
    @GuilhermeCosta-nv7zm 4 days ago

    It’s fascinating the similarity between GANs and the brain

  • @GrowlingBearMedia
    @GrowlingBearMedia 10 days ago

    Thank you Artem for this !!! 😍👌
    Been diving deep into Karl Friston via Michael Levin's talks with him, super glad to have you cover this!

  • @rogerzen8696
    @rogerzen8696 7 days ago

    The similarity between the math of this and PPO/GRPO in reinforcement learning is astounding! 😮

  • @alexharvey9721
    @alexharvey9721 11 days ago

    A very bold title and one I'm excited to see explored!

  • @radupopovici118
    @radupopovici118 2 days ago

    I wish you dropped this bomb video and the follow up sooner, I just had my (V)AE and Manifold Learning exam 😳

  • @Moe5Tavern
    @Moe5Tavern 9 days ago +3

    3:58 "were weighing" is the past continuous of "to weigh", not "were weighting". Just for those out there still learning English. Good luck!

    • @t11urtle63
      @t11urtle63 6 days ago +1

      sorry for being pedantic, but wouldn't the past tense of "to weigh" be "weighed" instead, with "weighing" being the present participle (in the way it's used in the video)?

    • @Moe5Tavern
      @Moe5Tavern 6 days ago +2

      @t11urtle63 "they were weighing" evidence. Yes you are correct, it's not the past tense directly but "were weighing" is past continuous. I fixed it, thank you :)

  • @Vlow52
    @Vlow52 8 days ago

    Beautifully explained. Perhaps it would be a good idea to suggest how creativity works on the basis of this brain model.

  • @carlosserrano4048
    @carlosserrano4048 11 days ago

    Nice work putting this together. So attainable.

  • @92kmoreno
    @92kmoreno 10 days ago

    Very interesting❤, I’m also very intrigued to know how the brain creates concepts and which parts are responsible for that function.

  • @GeoffryGifari
    @GeoffryGifari 2 days ago +1

    Are the generative and recognition networks distinct in both location and architecture?

  • @intheflesh4720
    @intheflesh4720 15 hours ago

    Hi, thank you for the great video!
    Are there any references for this free-energy based model?

  • @philosophia5577
    @philosophia5577 10 days ago +1

    Love you, thanks for this awesome work you're doing!!

  • @Froany
    @Froany 9 days ago

    Such a great video!!! Always inspired by these

  • @avboy6481
    @avboy6481 11 days ago +1

    Cool video, I wonder how this ties in with other functions of the brain such as the effect of hormonal or emotion changes. I'm also curious as to whether this is something that is partly "pre-programmed" or pre-determined in the brain like how we have a larger response to things that look like spiders and snakes or how animals seem to know what is needed for survival right after being born / no prior experience.

  • @nicgauthier7189
    @nicgauthier7189 2 days ago

    Which part of this loop is consciously experienced? Do we experience the signals that come in or do we experience the generated model?

  • @4thpdespanolo
    @4thpdespanolo 10 days ago

    This gives me hope for curriculum learning

  • @Dominis.
    @Dominis. 10 days ago +1

    Well, I've got a few questions now. What does this mean for the theory on schizophrenia? What could we say about "being in the moment"? What could we say about anxiety and groundedness?

    • @el_saltamontes
      @el_saltamontes 5 days ago

      Schizophrenia is the brain creating its own model of reality and confusing it with reality. This in itself (hallucinations) can happen to everyone. In schizophrenia, it is out of control and doesn't stop later. Hallucinogenics are a way to make the brain reach a state where they can happen, another way would be not sleeping for a while. Sometimes when you're at the point of falling asleep but your imagination is active, you will hear your own thoughts and think they are real sounds and wake back up.

  • @saneboysoup4168
    @saneboysoup4168 8 days ago

    That's a good explanation for visual illusions, and maybe even audio illusions like the Yanny/Laurel thing

  • @jackaboiXD
    @jackaboiXD 11 days ago

    i literally thought of this yesterday and now i see a whole video on it

  • @ednilsonclementecurymontei8629
    @ednilsonclementecurymontei8629 6 days ago

    That's almost a video about AI. Pretty amazing

  • @Neomadra
    @Neomadra 10 days ago +1

    So, according to this hypothesis, we would be able to learn to see inward faces? If I looked at inward faces every day for weeks, then my priors would get updated, right? I am also not 100% convinced that an inward face can always be physically differentiated from an outward face.

    • @ArtemKirsanov
      @ArtemKirsanov 10 days ago +1

      Hmm, yeah, I think so! It's similar to those experiments where they make people wear glasses that turn everything upside down. At first you see everything as inverted, and navigation is super hard. But around a week in, your perception "flips" and you start to see in those glasses as if everything were normal. So the brain has the ability to overcome priors like these with enough exposure

    • @zekulir6419
      @zekulir6419 9 days ago +1

      I remember seeing a few fun youtube videos about a "backwards" bike. The takeaway is always that you can get used to it even if it is oddly inoperable at first. Not the same but similar. The brain is fairly adaptive.
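The adaptation stories in this thread have the flavor of Bayesian belief updating: a strong prior takes many contrary observations to overturn, but it does get overturned. A beta-Bernoulli caricature with invented counts:

```python
# Beta-Bernoulli caricature: theta = believed probability that a face-like
# stimulus is convex. The pseudo-counts below are invented for illustration.
a, b = 50.0, 1.0                  # a lifetime of convex faces: strong prior
belief_before = a / (a + b)       # posterior mean of theta
for _ in range(200):              # weeks of hollow masks: concave observations
    b += 1.0                      # each one counts against "convex"
belief_after = a / (a + b)
```

The asymmetry matches the upside-down-glasses anecdote: the stronger the prior (larger pseudo-counts), the longer the exposure needed before perception flips.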

  • @intptointp
    @intptointp 10 days ago +7

    I find it beautiful that we really have been able to demonstrate to ourselves how we think by building out these ML models and demonstrating their validity.

    • @rand0mletters1
      @rand0mletters1 9 days ago +3

      The map is not the territory. It’s critical not to assume we are understanding ourselves any better through concepts in ML. We have demonstrated a way in which a type of thinking is possible but it could be as completely alien to how our brain works, as a beehive is.

    • @JohnSmith-op7ls
      @JohnSmith-op7ls 9 days ago +3

      We haven’t. Machine learning models are nothing like BNNs

    • @tmas
      @tmas 9 days ago

      @@JohnSmith-op7ls Surely on a higher level the behaviour derived from ML is similar to that of brains. I could be wrong, but to me the way I recognise faces is just through picking out key features, then describing someone's face as a specific combination of those features, with each feature weighted differently from person to person, just like how CNN facial recognition works. It's all emergent behaviour after all. Though I will admit that there are massive lower-level differences, not least that human brains don't have "layers" but woven "sections".

    • @vastabyss6496
      @vastabyss6496 8 days ago

      @JohnSmith-op7ls while most ML models share very little in common with BNNs, deep RL seems to share a surprising amount. Here are some examples:
      - CNNs are inspired by the mammalian visual cortex and conceptually work very similarly, with each neuron looking for patterns in a small patch rather than looking at the entire image.
      - Research shows that dopamine neurons in some areas of the brain mimic the TD error, the learning signal used by the most common algorithms for training RL agents.
      - Experience replay (and by extension prioritized experience replay) is a biologically inspired mechanism used to improve sample efficiency in deep RL algorithms (especially off-policy RL, like Q-learning).
      - Some RL algorithms judge actions against a baseline, which balances the amount of positive and negative rewards, loosely similar to how our brain balances pain and pleasure through the quantity of dopamine.
      - The SOTA RL algorithm DreamerV3 uses trained world models to imagine and learn from future scenarios. These world models are trained in a way that emulates the free energy principle.
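The TD-error point above can be sketched with a minimal tabular TD(0) update (toy states, rewards, and hyperparameters, not taken from any of the cited research):

```python
# Toy tabular TD(0): the TD error "delta" is the reward-prediction-error
# signal that dopamine neurons are reported to resemble.
gamma = 0.9   # discount factor
alpha = 0.1   # learning rate
V = {"s0": 0.0, "s1": 0.0}  # value estimates for two toy states

def td_update(V, s, r, s_next):
    """One TD(0) step: move V(s) toward the bootstrapped target r + gamma * V(s')."""
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta
    return delta

# An unexpected reward produces a large positive prediction error...
d_first = td_update(V, "s0", r=1.0, s_next="s1")
# ...and after many repetitions the error vanishes: the reward is now predicted.
for _ in range(200):
    td_update(V, "s0", r=1.0, s_next="s1")
d_later = td_update(V, "s0", r=1.0, s_next="s1")
print(d_first, d_later)  # the later error is vastly smaller than the first
```

The shrinking delta mirrors the classic finding that dopamine firing shifts away from a reward once it becomes predicted.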

  • @josephlabs
    @josephlabs 10 days ago +1

    Looking forward to the math video!

  • @jhonyandrade4970
    @jhonyandrade4970 8 days ago

    Wow, thank you man.

  • @elefantsnablar
    @elefantsnablar 10 days ago

    Super well explained!

  • @ernststravoblofeld
    @ernststravoblofeld 2 days ago

    I fully believe this prediction idea is fundamental to how we think. But the claim that all our perceptions are generated by a predictive brain system can be disproven by anyone who can make photorealistic drawings. We may not make use of it all the time, but we obviously have access to direct sensory data.

  • @johnnada9196
    @johnnada9196 10 days ago

    Really appreciate your work! I love it!

  • @AB-wf8ek
    @AB-wf8ek 7 days ago

    The more I observe and learn about how we perceive the world, the more the word "resonance" becomes pertinent to describing cognitive functions.

  • @factoral2645
    @factoral2645 10 days ago

    Awesome video! Looking forward to seeing the math

  • @fiachhoffman9590
    @fiachhoffman9590 9 days ago

    This gets real interesting when you apply Metzinger's phenomenal self-model

  • @cavesalamander6308
    @cavesalamander6308 11 days ago +3

    It appears that Occam's principle is a verbal expression of the principle of free energy minimization.

    • @starrmont4981
      @starrmont4981 10 days ago

      I agree. Low energy systems have a higher probability of existing than high energy systems, hence "the simplest solution is the most likely."
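That "lower energy is more probable" intuition is exactly the Boltzmann distribution; a tiny sketch with illustrative energies:

```python
import math

def boltzmann(energies, T=1.0):
    """State probabilities proportional to exp(-E/T) (Boltzmann distribution)."""
    weights = [math.exp(-e / T) for e in energies]
    Z = sum(weights)  # partition function (normalizer)
    return [w / Z for w in weights]

# Two states with made-up energies: the lower-energy state is more probable.
p_low, p_high = boltzmann([1.0, 2.0])
print(p_low > p_high)  # True
```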

    • @ArtemKirsanov
      @ArtemKirsanov  10 days ago

      Yes! They are really related!

  • @wp9860
    @wp9860 9 days ago +1

    Friston states that things like living systems must APPEAR to be making inferences about their environment and the effects of their actions: predictive processing. It doesn't say they are actually implementing an inference algorithm. When people are dealing with complicated decisions like career planning, it is hard to believe this inference process is not internalized. In complex, deliberative decision making, is inference actually being applied? In his writings, Friston references E. T. Jaynes. Jaynes observed that a physical free energy problem was easiest to solve, and required the fewest assumptions, if it was analyzed as a subjective probability calculation. Does this come into play in the human mind? Does the brain do its thermodynamic thing and, as a result, create subjective judgements as co-products, similar to what a dual problem does in mathematics?

  • @ankitkumarpandey7262
    @ankitkumarpandey7262 6 days ago

    Are the concepts of recognition and generative models related to / analogous to contrastive learning in deep learning (Siamese networks), or am I missing something?

  • @GeoffryGifari
    @GeoffryGifari 2 days ago +1

    Is there a limit to how accurately the generative-recognition loop can represent external reality?
    If trained with thousands of concave faces, could a human brain eventually accept that the face in front of it is concave?

  • @bogdantataru843
    @bogdantataru843 10 days ago

    Excellent work! Thank you!

  • @wex2808
    @wex2808 9 days ago

    what a good video, subscribed

  • @Pedritox0953
    @Pedritox0953 7 days ago

    Great video! Peace out

  • @ickkckmagma3192
    @ickkckmagma3192 9 days ago

    I read an argument against Chomsky's view of language. It says there also exists some kind of statistical learning, demonstrated in infants by how they detect patterns in syllables well before they can form sentences in the language. Is this related to that?

  • @minhdo7918
    @minhdo7918 1 day ago

    At 12:53, it technically should be 1,000,000 permutations instead of combinations, right, since order matters here?
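The permutation/combination distinction is just a factor of k!; a quick check with Python's math module (n and k here are made up for illustration, not the video's values):

```python
import math

n, k = 10, 6  # hypothetical values, just to show the distinction

ordered = math.perm(n, k)      # n! / (n - k)!  -- order matters
unordered = math.comb(n, k)    # n! / (k! * (n - k)!)  -- order ignored
print(ordered, unordered)      # 151200 210

# Permutations exceed combinations by exactly k!, the number of orderings
# of each selection.
print(ordered == unordered * math.factorial(k))  # True
```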

  • @jakokaiser1169
    @jakokaiser1169 10 days ago

    Extremely interesting concept and good explanation. Well done :)
    Since you sparked my interest: is there evidence for this theory yet?

    • @ArtemKirsanov
      @ArtemKirsanov  10 days ago +2

      Thanks!
      It's notoriously hard to experimentally verify, but there is some recent progress. For instance see www.nature.com/articles/s41467-023-40141-z

    • @jakokaiser1169
      @jakokaiser1169 10 days ago

      @ArtemKirsanov thank you :)

  • @Entropy67
    @Entropy67 8 days ago

    Great video

  • @JohnkingDoiner
    @JohnkingDoiner 11 days ago

    nice explanation of optical illusion

  • @WheelScreech
    @WheelScreech 9 days ago +1

    Dude watching this high is insane

    • @-tarificpromo-7196
      @-tarificpromo-7196 4 days ago

      That's because neuroplasticity helps you learn. Learning forces the hard-coded machinery in your organs to patch RNA that makes new neurons. This is still classified and not mainstream, so 🤫

  • @panizzutti
    @panizzutti 5 days ago

    What do you think about the Thousand Brains Theory? I know you believe in reference frames, but do you think it can be applied to DL?

  • @JaredQueiroz
    @JaredQueiroz 11 days ago +6

    Wait, I can easily break the illusion if I want.... I can literally shift between one and the other as I please.
    (I remember being able to do the same with that blue/black or white/gold dress, shifting between one and the other as I wanted.... The same thing with various optical illusions... Maybe I'm just really good at self-induced pareidolia)

    • @misslayer999
      @misslayer999 11 days ago +3

      Yeah same here. I've always been able to intentionally shift between which way I see these kinds of illusions (the dress, rabbit/duck, face/vase, etc)

    • @ArtemKirsanov
      @ArtemKirsanov  10 days ago +2

      That's interesting! I can also easily switch between the colors of the dress and things like face vs vase. But not for this one.
      I think it's because for other illusions there are 2 alternative explanations that are equally likely - there is no strong bias for whether you should expect to see a rabbit or a duck. As a result, the brain is "flickering" between the 2 equally compatible explanations. But here the convex face has a strong prior, which is why it typically dominates (for most people). But there are individual variations for sure!

    • @kn49
      @kn49 8 days ago

      Some people apparently don't get tricked by the hollow face illusion. I can't recall if it was determined to be a genetic cause or a brain disorder however.

    • @JanVerny
      @JanVerny 4 days ago

      Most optical illusions don't "work" on me either. I've learned how to convince myself into seeing whatever I want to see. However, for me as well, it is very hard to do this with the hollow-mask illusion.

    • @__cooper__
      @__cooper__ 2 days ago +1

      @kn49 I think it's perhaps in part due to sensory processing being top-down vs bottom-up. ASD, schizophrenia and similar mental setups involve higher dendritic spine density or alterations to the spines - less synaptic pruning, or oddities with them (like changes to the 5-HT2A receptors) - and are less susceptible to this type of visual illusion as a result. The common thread among minds with less synaptic pruning would be bottom-up processing of sensory signals, if I remember right. (It's been a while; I'm not a neuroscientist, just curious about this stuff, so don't take it as gospel. Correlation isn't causation either, so maybe "associated with" rather than "caused by".)

  • @VISHWAp.s-w8v
    @VISHWAp.s-w8v 11 days ago

    Thank you, you helped me a lot

  • @NoPodcastsHere
    @NoPodcastsHere 10 days ago

    I think it would have been cool to talk about 'performance magic' and sleight of hand in general as exploiting this phenomenon

  • @Yozh2_
    @Yozh2_ 6 days ago

    This is exactly how a Kalman Filter works: it estimates posteriors based on sensory observations and their likelihood according to prior hypotheses. Our brains learned filtering, I guess.
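A one-dimensional sketch of that predict/update cycle (prior, then observation, then posterior), with made-up noise parameters:

```python
def kalman_step(mu, var, z, process_var=1.0, meas_var=2.0):
    """One 1-D Kalman filter cycle: predict (prior), then update with observation z."""
    # Predict: prior for the next state under random-walk dynamics.
    mu_prior, var_prior = mu, var + process_var
    # Update: blend prior and observation, weighted by the Kalman gain.
    K = var_prior / (var_prior + meas_var)
    mu_post = mu_prior + K * (z - mu_prior)   # posterior mean
    var_post = (1.0 - K) * var_prior          # posterior variance (uncertainty shrinks)
    return mu_post, var_post

mu, var = 0.0, 10.0                  # vague initial belief
for z in [1.2, 0.9, 1.1, 1.0]:       # noisy observations of a quantity near 1.0
    mu, var = kalman_step(mu, var, z)
print(mu, var)  # belief has moved close to 1.0, with reduced uncertainty
```

The gain K plays the role of precision weighting: the noisier the measurement relative to the prior, the less the observation moves the belief.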

  • @QRstudy-g2h
    @QRstudy-g2h 9 days ago +1

    18:57 isn't this also similar to the loss function of a variational autoencoder?
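For reference, the negative ELBO of a VAE does have the same accuracy-plus-complexity shape; a minimal sketch with diagonal Gaussians and toy numbers (plain Python, not any particular framework's API):

```python
import math

def kl_to_standard_normal(mu, var):
    """KL( N(mu, var) || N(0, 1) ) for a diagonal Gaussian -- the 'complexity' term."""
    return 0.5 * sum(v + m * m - 1.0 - math.log(v) for m, v in zip(mu, var))

def neg_elbo(x, x_recon, mu, var):
    """Negative ELBO = reconstruction error ('accuracy') + KL ('complexity'),
    the same two-term decomposition as variational free energy."""
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))  # Gaussian log-lik up to constants
    return recon + kl_to_standard_normal(mu, var)

# Toy numbers: a 2-D input, its reconstruction, and the encoder's latent Gaussian.
loss = neg_elbo(x=[1.0, 0.0], x_recon=[0.9, 0.1], mu=[0.2, -0.1], var=[0.8, 1.1])
print(loss)
```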

  • @calicoesblue4703
    @calicoesblue4703 4 days ago

    Nice. Great Video :)

  • @fujiclimber
    @fujiclimber 11 days ago

    Amazing video! ty

  • @Strained75
    @Strained75 1 day ago

    Can you account for psychosis in your theory?

  • @tortilladmaiz
    @tortilladmaiz 6 days ago

    This is how the theory of constructed emotions works, and it provides a different angle for understanding how the brain predicts. If you want a simpler explanation, Lisa Feldman Barrett's book "Seven and a Half Lessons About the Brain" explains this principle in the half lesson, on body budgeting (allostasis).

  • @jurian0101
    @jurian0101 9 days ago

    "Melancholy creature. /
    Paranoid secret. /
    Hypothetic victim of prediction." ~~ Lyrics from Sufjan Stevens' "Saturn"
    (It is referring to the myth's patricidal prophecy.) Either it fits nicely or it's just my brain doing bogus inference, aka pareidolic pattern matching. But I won't be surprised if the Free Energy Principle turns out to have been used in the variational autoencoder architecture. :D
    Then, like Prof. Geoffrey Hinton jokingly said in his Nobel lecture: were the architecture really how the brain works, they'd get a Nobel in Physiology, or else get it in Physics.

  • @IdPreferNot1
    @IdPreferNot1 9 days ago

    There is a theory that LLMs organize their learning in a physical form to minimize "free energy", in this case the instability as predicted by Deitriech's principle. This is an actual physical phenomenon, like potential energy. I wonder if the free energy here has a physical representation?

  • @MichaelParrish-kk3ys
    @MichaelParrish-kk3ys 9 days ago

    So complexity = expected energy and accuracy = entropy? Just trying to understand how your simplification relates to the math. Thanks!

  • @MrGustavier
    @MrGustavier 11 days ago +3

    I'm a bit surprised you talk about "free energy" but say almost nothing about Bayesian inference and Bayesian networks.
    If I understood correctly, the definition of "free energy" is basically the difference between the likelihood and the priors (in Bayesian jargon). But this is not what is used in Bayesian inference, since Bayesian inference takes the product of the likelihood and the priors... Why the difference, then?

    • @hyperduality2838
      @hyperduality2838 11 days ago

      Making predictions is a syntropic process -- teleological.
      Cause is dual to effect -- causality.
      Information is dual.
      Syntropy (knowledge, prediction) is dual to increasing entropy (lack of knowledge) -- the 4th law of thermodynamics!
      Concepts are dual to percepts -- the mind duality of Immanuel Kant.
      "Always two there are" -- Yoda.

    • @ArtemKirsanov
      @ArtemKirsanov  10 days ago

      Yes, you're correct! It's just that I aimed to keep the video on the conceptual level and emphasize the biological perspective without diving into the formalism just yet.
      But this is one way to think about Bayesian inference, so there is no difference really :)
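For reference, the standard decomposition of variational free energy makes the connection explicit (a sketch in generic notation, not necessarily the video's):

```latex
F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s)\big]}_{\text{complexity}}
     - \underbrace{\mathbb{E}_{q(s)}\big[\ln p(o \mid s)\big]}_{\text{accuracy}}
     = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
```

Because the last KL term is non-negative, minimizing F over q drives q(s) toward the exact Bayesian posterior p(s|o) ∝ p(o|s) p(s). In log space Bayes' product of likelihood and prior becomes a sum, so the "difference" form and the "product" form describe the same inference.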

    • @MrGustavier
      @MrGustavier 10 days ago

      @ArtemKirsanov One way to think about Bayesian inference is to subtract the priors from the likelihood ...? I hope that will be the topic of a future video!

    • @hyperduality2838
      @hyperduality2838 10 days ago

      @@ArtemKirsanov "The brain is a prediction machine" -- Karl Friston, neuroscientist.
      Making predictions to track targets, goals and objectives is a syntropic process -- teleological.
      "Through imagination and reason we turn experience into foresight (prediction)" -- Spinoza describing syntropy.
      Probability amplitudes (waves, Bosons) are becoming probability densities (particles, Fermions) -- the Born rule in physics.
      Alternating currents are dual to direct currents -- rectification or diodes.
      Your brain is rectifying probability waves into information (signals) -- AC is dual to DC.
      The brain acts like a giant diode as it is rectifying information -- signals are dual to noise.

  • @mubashshiruddin3567
    @mubashshiruddin3567 11 days ago

    amazing video 🤓