A Map of Social Space in Your Brain

  • Published 31 May 2024
  • Shortform link:
    shortform.com/artem
    My name is Artem, I'm a computational neuroscience student and researcher. In this video we talk about how the hippocampus serves as a "social map", representing information about conspecific individuals at different levels of abstraction.
    Patreon: / artemkirsanov
    Twitter: / artemkrsv
    OUTLINE:
    00:00 Introduction
    03:30 Overview of physical place cells
    04:57 Social information in physical space
    11:28 Abstract social space
    15:36 Recap
    16:12 Shortform
    17:03 Outro
    REFERENCES:
    1. Omer, D. B., Maimon, S. R., Las, L. & Ulanovsky, N. Social place-cells in the bat hippocampus. Science 359, 218-224 (2018).
    2. Tavares, R. M. et al. A Map for Social Navigation in the Human Brain. Neuron 87, 231-243 (2015).
    3. Schafer, M. & Schiller, D. Navigating Social Space. Neuron 100, 476-489 (2018).
    4. Eichenbaum, H. The Hippocampus as a Cognitive Map … of Social Space. Neuron 87, 9-11 (2015).
    CREDITS:
    Icons by biorender.com
    3D model of a bat (CC license) - skfb.ly/6WEYy
    Brain 3D models were modeled with Blender software using publicly available BrainGlobe atlases (brainglobe.info/atlas-api)

COMMENTS • 117

  • @ArtemKirsanov
    @ArtemKirsanov  1 year ago +5

    Join Shortform for awesome book guides and get 5 days of unlimited access! shortform.com/artem

    • @yashaswikulshreshtha1588
      @yashaswikulshreshtha1588 1 year ago

      To be honest, you deserve a much larger subscriber and viewer base. I love your content; keep making these videos! By the way, you could also cover other topics about the brain, for example psychology from a neuroscience perspective instead of the philosophical perspective other youtubers typically take. I would say cover other brain-related topics from the same angle (what's the mechanism at the lower level?) as you do now.

    • @jackjohns232
      @jackjohns232 1 year ago

      How can you claim social space isn't physical? You're describing physical objects, which are defined by their physical behaviour and their physical interactions with other physical behaviours.
      Cognitive bias in scientism is out of control.

  • @geraldkenneth119
    @geraldkenneth119 1 year ago +2

    It’s fascinating how one mechanism in the brain has evolved to generalize to various other abstractly-similar tasks. It makes me wonder if that same “spatial circuitry” might be used in the recall of long-term memory

  • @tomasvasutiu2275
    @tomasvasutiu2275 1 year ago +45

    Thank you for bringing these interesting topics to us in such an easy-to-understand way, while presenting the information so clearly. I love learning about the brain, psychology and all things related, so discovering your channel has been a great Christmas gift. Given this quality, there is no doubt you'll make it far. Cheers

  • @jordanfarr3157
    @jordanfarr3157 1 year ago +12

    This is one of the most underappreciated channels out there. When I share videos from here with friends and family, I pitch it with: "The generation under me is so ridiculously smart and I feel like a complete failure". Just spectacular visualizations on extremely advanced topics.

    • @bettyboop5454
      @bettyboop5454 10 months ago

      99K followers. It is NOT an underappreciated channel; it's just at the beginning of the path whose destination is the moon.

    • @jordanfarr3157
      @jordanfarr3157 10 months ago

      @@bettyboop5454 Six months ago, when I left the comment, 90k followers looked to be a far cry.

  • @leonardoalvarez1749
    @leonardoalvarez1749 1 year ago +16

    Great video! I was a bit confused about a neuron that can represent both the position of another bat and the position of itself. The paper (David B. Omer et al.) was a bit clearer: it is not a single neuron, but a "subpopulation of neurons".
    It just makes more sense to me that a population of neurons can have a dual role.

    • @victoroyervides6913
      @victoroyervides6913 2 months ago

      I do think it's also okay to think of them as single neurons, like Artem said, because such neurons are task dependent: based on the context, they perform either of two functions, that of a place cell or a social cell, so they are more of a generalist type of neuron when it comes to function itself.

  • @AlvaroALorite
    @AlvaroALorite 1 year ago +3

    I'll say it time and time again: I'm in LOVE with this channel.

  • @algoak239
    @algoak239 1 year ago +6

    I compliment you on every video you upload; they are on the podium of the most stimulating content on the YouTube platform. Very high quality.

  • @TC-nr1nl
    @TC-nr1nl 1 year ago +3

    Thank you so much Artem for making these videos. They are so informative and interesting. Surprised you don’t have more followers. Keep up the excellent work!

  • @chipsi21
    @chipsi21 1 year ago +1

    This to me is beyond amazing. Thank you so much for creating these videos!

  • @joeystenbeck6697
    @joeystenbeck6697 1 year ago +7

    Phenomenal. Been curious about neurons sharing responsibilities. I’m coming from comp. sci. and wanting to dig into what hidden layers mean. This perfectly walks through a real world example, has great explanations, and those visuals… they’re really top notch. And synthesizes research. Great job!!

    • @user-hy6cp6xp9f
      @user-hy6cp6xp9f 1 year ago +1

      This is super interesting. Care to share what you’ve learned so far?

    • @joeystenbeck6697
      @joeystenbeck6697 1 year ago +2

      @@user-hy6cp6xp9f Sure! There's so much to talk about lol. But here are some bullet points:
      - Current language models are incapable of abstraction -- they just learn to mimic human speech based on a vast corpus.
      - Current language models (that I know of) are restricted by:
      - A rectangular architecture (I don't know if that's a term). What I mean is they usually have some number of layers and some number of neurons per layer, so it looks like a rectangle (see the sketch after this comment).
      - This is bad because it tries to fit a conceptual space that is probably not best represented by a rectangle into a rectangle.
      - Related: the network can't change its architecture. It can't create new neurons or connections, even if it needs to.
      - This is bad because gradient descent, which is used almost universally in machine learning, "greedily" adjusts the network to make it learn. Greedily means that it adjusts its weights toward whatever is best *in the moment*. This can lead to premature convergence, and in a complex domain it means being almost guaranteed not to reach the global minimum, let alone efficiently.
      - Our brains are plastic and can create new connections and neurons. They are also graph-based. This removes those restrictions. It removes the premature convergence because we can always get out of it by creating new connections/neurons, and we can always simplify/reduce/generalize by pruning or rewiring. There are no limitations, aside from our biology and spatial constraints, like one neuron from one part of the brain connecting to one all the way across. I'm not a biologist, but I have a hunch that isn't very biologically feasible, and if it is, not efficient lol.
      - Removing those restrictions also means we can reorganize, and our neurons can form more modular structures. I have no basis for this, but I think whenever we use analogies or apply knowledge from one area to another (e.g. from calculus in a math class to calculus in a physics or engineering class), we could be reusing the same structure from the original area. Related is the concept of isomorphism. I'm not a mathematician, but my layman's understanding of it is "equivalent structure", so if two domains have equivalent conceptual structures for a part of each domain, then they can share the same structure to represent those parts of each domain.
      - Something that could go against this: what happens if both contexts are activated at once? Our brains maintain state (a comp. sci. term), so reusing the same state for both at once could lead to bad output, I think. I could be wrong. Maybe our brains inhibit inputs so that only one domain uses that graph at a time, and maintain the context of pure math versus engineering/physics elsewhere in the network.
      - Just a tidbit, but I like to think of this as Obsidian (the note-taking software) versus a hierarchical, folder-based app. Obsidian is graph-based, while folder-based apps inherently restrict your ability to organize and connect ideas, because something might fall under multiple categories. Analogous to this is a neuron being restricted to interacting with a certain set of other neurons in a rectangular architecture, because a rectangular architecture can't have neurogenesis (that would break the architecture and make it need to be completely retrained).
      I have more to write but gonna leave this here for what I wrote so far.
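      Here is a minimal NumPy sketch of the fixed "rectangular" architecture idea above (my own illustration, not from the video or any paper; layer_sizes and forward are made-up names). The depth and width are frozen up front, so adding even one neuron would change the adjacent weight shapes and force a rebuild.
      import numpy as np
      rng = np.random.default_rng(0)
      # Fixed "rectangle": the number of layers and neurons per layer is chosen before training.
      layer_sizes = [8, 16, 16, 16, 4]   # input, three hidden layers, output
      weights = [rng.normal(0.0, 0.1, size=(n_in, n_out))
                 for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
      biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]
      def forward(x):
          """Feed-forward pass with continuous (ReLU) hidden activations and a linear output."""
          h = x
          for W, b in zip(weights[:-1], biases[:-1]):
              h = np.maximum(0.0, h @ W + b)   # ReLU keeps the pass differentiable for SGD
          return h @ weights[-1] + biases[-1]
      x = rng.normal(size=(1, layer_sizes[0]))
      print(forward(x).shape)   # (1, 4)
      # Adding one neuron to a hidden layer would change the shapes of two weight matrices,
      # which is the "no neurogenesis without retraining" restriction described above.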

    • @user-hy6cp6xp9f
      @user-hy6cp6xp9f 1 year ago +1

      @@joeystenbeck6697 This is interesting, I will have more to say later. I wonder if you're stretching the analogy between neural network architecture and actual biological brains. This made me think of analog computers, and how they might make a comeback for machine learning.

    • @user-hy6cp6xp9f
      @user-hy6cp6xp9f 1 year ago +1

      I tend to think about ML in the same way as advanced regression models, as someone with an economics background. I know there is overlap, but I feel like the non-biological architecture of current models is not necessarily why they are limited, you know?

    • @joeystenbeck6697
      @joeystenbeck6697 1 year ago +1

      @@user-hy6cp6xp9f Yeah, it's possible analog computers could come back. I know that it's really hard to control exactly how much electricity there is for a bit. I'm really limited by language here because I don't know electrical engineering, but I know that a bit counts as a 1 if it reaches a certain electrical threshold and doesn't if it doesn't reach it. So we're not holding the electricity at an exact level like might be necessary for analog. I don't know much about analog either, though, so I could totally be wrong.
      Stripping away the biological-vs-metal part, I think these are the main differences between current artificial nets and biological nets:
      - Brains take in a set of inputs and either fire or don't, whereas current artificial nets are typically more continuous, e.g. ReLU or sigmoid.
      - I might be wrong, but I think this means that artificial nets can't abstract as well, because they're forced to abstract continuously, whereas it might be more accurate or more efficient to abstract as yes-or-no/binary. That contrasts with the ReLU function, sigmoid, etc., which all assume a continuous output at some point. I think we may be using these instead of a binary activation function because a binary activation function is very limited in what it can represent, so the network, like our brains, would need to create/kill neurons. Binary activation functions are usually only used for the output layer, not in the hidden layers (from my understanding, could be wrong; see the sketch after this comment).
      - Humans go through the sensorimotor stage of development when we're super young, so we interact with the 3D physical space we experience. Computers don't get that, so they don't have an intuition for 3D objects. This is similar to an LSTM telling the net that the data has a sequential/temporal relationship -- just with common constructs we see, like seeing a cat, or the shapes that make up a cat, or the legs of a cat. And it turns out other animals have legs too, so that's an important abstraction we have that a network might not pick up. I'm sure there are ways around it, and maybe they're in use and I'm just unaware of them. I think we are able to internalize spatial intuition because we've interacted with the world so much, and that we rely on that when doing thought experiments, which would explain why most of us can't think easily in 4D space.
      - Humans have rehearse-and-decay mechanisms to retain information that is important. "CogLTX: Applying BERT to Long Texts" (Ding, Tang, Yang, Zhou) talks some about this and about other cognitive psychology concepts being applied in natural language processing.
      - Brains have different structures in them, e.g. Broca's area and Wernicke's area. Related are graph pattern producing networks (GPPNs), which locally build the structures that they're encoded to build. This is analogous to DNA. My message got deleted when I added a link, but I learned about GPPNs and self-repairing/self-building networks from Yannic Kilcher's YouTube video "The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi)". P.S. they built 3D self-building/self-repairing structures in Minecraft, which imo is pretty sick. 23:15 is a time stamp for that.
      - Brains have state. They track recent inputs and pass that around through neurons in real time, and neurons can loop back around to each other.
      - Artificial nets don't have state. They can be seen as functions in math: you give one an input and you get the same output every time, unless you add randomness. An LSTM is kind of an exception, but not really. LSTMs look at inputs as a sequence of data, e.g. our words; they're temporally sensitive. They maintain some info on previous words in a "hidden state" that they pass along while sequentially processing inputs, but it's super limited because you're forced to squeeze all of the previous words/grammar/semantics into a hidden state that isn't very large. So it has to drop a lot of info and doesn't work well with long sequences. I think our brains get around that by storing state in a distributed way throughout the brain while our neurons are firing, and our neurons can loop back to each other.
      - Neurotransmitters. Not gonna lie, I know almost nothing about them, but I know that they are essential in transmitting info around the brain and in inhibiting/exciting neurons (or inputs to neurons? I don't know). Artificial nets can have vanishing gradients and exploding gradients. I think exploding gradients are analogous to a seizure, in that neurons excite each other more and more. In artificial nets this typically runs in only one direction, and it is able to explode because activation functions are typically not binary. In our brains the output is binary (with some weight attached between the current neuron firing and the next ones, I think; could be wrong), so it wouldn't have the issue in that way, but I think lack of inhibition would cause it. I'm really talking outside my domain right now though, so take that with a ton of salt.
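      A small sketch of the activation-function point above (my own illustration, not the commenter's; step, sigmoid and numerical_grad are invented helper names): a binary "fire or don't" unit has zero gradient almost everywhere, so gradient descent gets no learning signal from it, unlike a continuous unit.
      import numpy as np
      def step(z):                      # binary "fire or don't" activation
          return (z > 0.0).astype(float)
      def sigmoid(z):                   # continuous activation typical of artificial nets
          return 1.0 / (1.0 + np.exp(-z))
      def numerical_grad(f, z, eps=1e-4):
          return (f(z + eps) - f(z - eps)) / (2 * eps)
      z = np.linspace(-3.0, 3.0, 6)     # sample points away from the threshold at 0
      print("step gradients:   ", numerical_grad(step, z))      # all zeros -> nothing to learn from
      print("sigmoid gradients:", numerical_grad(sigmoid, z))   # smooth, nonzero values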

  • @zahraboudidah9334
    @zahraboudidah9334 1 year ago +5

    You, sir, are gonna save my neuroscience course performance. I thank you most profusely 🤣🙏

  • @unkind6070
    @unkind6070 6 months ago

    This channel is extremely unique, I love it so much ❤️❤️

  • @fabiopakk
    @fabiopakk 1 year ago

    Another fantastic job, Artem!

  • @ChildishBerbino
    @ChildishBerbino 1 year ago +1

    This is going to be a wild ride. I have never been able to work out family trees and couldn't even come close to solving the first (I'm guessing easy) problem :o

  • @arasharfa
    @arasharfa 1 year ago +1

    I relate to your way of thinking very much!

  • @gabrielchristiannegre2715
    @gabrielchristiannegre2715 1 year ago +1

    Amazing! Thank you for the video!

  • @filipdroszcz1073
    @filipdroszcz1073 1 year ago

    Beautiful work

  • @generaltheory
    @generaltheory 1 year ago

    Outstanding content, bro. A fan from the first video.

  • @GabrielLima-gh2we
    @GabrielLima-gh2we 1 year ago

    Very good video, really interesting topic. Keep up the work!

  • @bpath60
    @bpath60 10 months ago

    Brilliant videos. Thank you!

  • @defnd3r
    @defnd3r 1 year ago

    Happy you're back - I was worried there for a moment.

    • @ArtemKirsanov
      @ArtemKirsanov  1 year ago +2

      Yeah, sorry about that. Everything is okay ;)
      I was just really busy preparing all the graduate school applications.

  • @josephrohrbach1588
    @josephrohrbach1588 1 year ago +10

    Really excellent video! Takes you through the "scientific" thinking involved whilst being clear, unpatronizing, and engaging without being sensationalized. Well-researched and gorgeously illustrated.
    You raise an interesting question about language. Might be worth looking into how non-Indo-European languages do this, though. It's easy to see similarities between directional social terms in different IE languages, but that doesn't show a non-trivial common influence. I only have relatively limited knowledge of non-IE languages, sadly. I do know there are at least a good few non-directional social terms in Classical Nahuatl, but that's not especially useful on its own. It'd take a systematic study of a variety of unrelated languages to establish something like that robustly, I think.
    Interesting stuff!

    • @ArtemKirsanov
      @ArtemKirsanov  1 year ago +4

      Thank you!
      This sounds really interesting indeed!
      On a related note, it reminds me that certain languages operate with allocentric reference frames in everyday communication ("north", "south") rather than egocentric ("left" / "right") ( www.researchgate.net/publication/11242044_Language_and_spatial_frames_of_reference_in_mind_and_brain )
      I wonder if this is somehow reflected in their vocabulary for social terms.

    • @josephrohrbach1588
      @josephrohrbach1588 1 year ago +1

      @@ArtemKirsanov Very interesting indeed. Classical Nahuatl's directional terms are generally agglutinative. That is to say, they are tacked on the end of another word. For instance, -tlan means "by, among, in, near" (among other things). Thus it combines with "tlaxcalli" (maize tortilla) to make the toponym Tlaxcallān, "Place of the Tortillas". (That's a real city, if you didn't know.) A particularly interesting combination.
      One important Nahuatl suffix is -tzin, meaning "honoured, revered", but also "small". It seems the second meaning is actually the older one. I wonder how it evolved that way!

    • @user-hy6cp6xp9f
      @user-hy6cp6xp9f 1 year ago +3

      I can speak Mandarin, and from what I can tell there is little difference in how people talk about social hierarchy and relations, at least from the perspective of prepositions, etc.

  • @GenRicOpekc
    @GenRicOpekc 10 months ago

    Amazing video.

  • @a.c.6416
    @a.c.6416 1 year ago +2

    Now that's a great Christmas!

  • @tmvkrpxl0135
    @tmvkrpxl0135 1 year ago +2

    If the hippocampus can handle social maps, can it also handle other abstract spatial tasks, such as planning out a network between manufacturing sites and consuming sites in a business, or the flow of data in a computer?

  • @Heter95
    @Heter95 1 year ago

    Great content!

  • @chessthoughts
    @chessthoughts 1 year ago

    This channel is awesome sauce

  • @nonfictionshow
    @nonfictionshow 1 year ago

    Great video!

  • @timothytyree5211
    @timothytyree5211 1 year ago

    This is a really interesting video, Artem! Right up my alley.
    Could you do a hypothesis video on how the social concept cells you mentioned
    might result from a broader picture of how you think the hippocampus could function?
    I'm interested in your opinion/perspective on this.
    For some more reading, Quiroga did a bunch of work on concept cells that got a lot of attention [1-2].
    However, I think they're more of a symptom than a mechanism...
    References:
    [1] R. Q. Quiroga, et al. Nature. 435, 1102-1107 (2005).
    [2] R. Quian Quiroga, et al. Curr. Biol. 19, 1308-1313 (2009).

  • @orbismworldbuilding8428
    @orbismworldbuilding8428 1 year ago

    Ooooo boy we're getting into abstract visual thinking

  • @bungalowjuice7225
    @bungalowjuice7225 1 year ago +1

    You're great at this. Do you teach professionally?
    This phenomenon is what I "use" when I see a literal map in my brain where everyone has a "pawn" on said map.

  • @user-kp8pv6fs
    @user-kp8pv6fs 1 year ago

    Very good job!

  • @aytaf5430
    @aytaf5430 1 year ago

    I was amazed by the ability of certain elders in black in my village to find all the possible relations a group of people may share, and the closest one. Believe me, when we are talking about 200 people who are basically isolated and only marry each other, with no parallel generations, it is not trivial at all. I tried to learn that but gave up.
    Mind you, my spatial intelligence and logical-mathematical intelligence are very high, but it still takes time to process all possible routes.
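    The "closest relation" task those elders solve maps nicely onto shortest-path search over a relationship graph. Here is a toy sketch (the names, relations and helper functions are entirely invented for illustration):
    from collections import deque
    # Each edge stores the relation as seen from the first person (a made-up mini family).
    family = {
        "Amina": [("Omar", "mother of"), ("Sara", "sister of")],
        "Omar":  [("Amina", "son of"), ("Lina", "husband of")],
        "Sara":  [("Amina", "sister of")],
        "Lina":  [("Omar", "wife of")],
    }
    def closest_relation(start, goal):
        """Breadth-first search: returns the shortest chain of relation steps."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            person, path = queue.popleft()
            if person == goal:
                return path
            for neighbour, relation in family.get(person, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, path + [(person, relation, neighbour)]))
        return None
    print(closest_relation("Sara", "Lina"))
    # [('Sara', 'sister of', 'Amina'), ('Amina', 'mother of', 'Omar'), ('Omar', 'husband of', 'Lina')]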

  • @PrintflexHealth
    @PrintflexHealth 1 year ago

    Super interesting, thank you very much :)

  • @stevenschilizzi4104
    @stevenschilizzi4104 1 year ago

    Fascinating! How far are we in building a synthetic hippocampus, and will that end up being an input towards AGI one day?

  • @alterego2421
    @alterego2421 1 year ago

    Damn, nice video. Crazy that you have just 50k subs.

  • @karimhabashy803
    @karimhabashy803 1 year ago +3

    Do you think this extends to language as well? On one hand, the hippocampus is active in consolidating spatial memories and, as you explain in this video, has implications for social space. On the other, it is also involved in navigation and path integration. Do you think this might be analogous to navigation from a language perspective (where we "navigate" through arguments and ideas to communicate)?

    • @arasharfa
      @arasharfa 1 year ago

      Yes, in just the same way that words have virtual meaning that changes depending on where in a sentence they're placed. We live in a hypnosis updated with measurements from the outside through our senses.

    • @BigHenFor
      @BigHenFor 1 year ago +2

      That's why a synonym for argument is position: we inherently map any environment by modelling it relative to our position, whether that is physical or intellectual. This is why theories are models, which are either descriptive maps and/or predictive computations that begin at a purposively chosen point as a position. This position is the strength and the weakness of theory, because positions are abstractions that do not capture the totality of reality. Like maps or computations, they only include data or information relevant to making the theory make sense. This internal logic does not mean that the theory is true or realistic. Moreover, our thinking as a species is not purely logical, as logic is the latest evolution of our brains, and the older brain, as our survival mechanism, can override logic through instinct and emotion. Therefore, a healthy scepticism may be as important with our own positions as with those of others.

    • @ArtemKirsanov
      @ArtemKirsanov  1 year ago +2

      Totally! I believe that words and language are connected to abstract concepts our brains inherently operate with, which are represented in the hippocampus. And to some extent this is present in rodents as well ( www.pnas.org/doi/10.1073/pnas.0701106104 )
      So maybe language is a way to access navigation in the concept space 🤔

    • @joeystenbeck6697
      @joeystenbeck6697 1 year ago

      @@ArtemKirsanov I wonder how this relates to learning a language to a native level becoming harder with age. I'd think that if it's entirely built in, then learning a new language would be easy, because we're just attaching labels to existing logic in our brain. Relatedly, I'm wondering about labels for concepts being seen as the concepts themselves versus labels being seen as just ways to access them, kind of like keys in a hash map. If someone learns two languages when young, will their neurons form patterns about the logic itself and keep the labels distinct? And if someone learns an additional language as an adult, is it mapping labels from the additional language to labels from the native language, because the person wouldn't have made the distinction when it was critical if they only learned one language? Maybe it's not as cut and dried given the spectrum of specializing and generalizing, but I wonder if/how much that distinction influences neural connections.

    • @user-hy6cp6xp9f
      @user-hy6cp6xp9f 1 year ago +1

      @@joeystenbeck6697 I've learned a few languages and want to contribute some potential insight. I think that when adults begin learning languages, we tend to associate labels in foreign languages with labels in our own. However, at a certain point your brain stops referencing your native labels, as you attain fluency. Before, the flow may have been concept > English word > Chinese equivalent > speech, but now it simply goes from concept to Chinese speech.
      Keep in mind kids do this too when they learn foreign languages, albeit much faster.
      I think that producing language is separate from conceptual reasoning, and maybe the connection is just that kids have more plastic brains in general. There are things adults are good at learning that kids are not, however, so I wonder if there is actually a connection with aging besides plasticity.

  • @lucaferlisi2486
    @lucaferlisi2486 21 days ago

    This is weird. Need more videos about it.

  • @amberd0g6
    @amberd0g6 1 year ago

    Amazing work! Greetings from a fellow hippocampal researcher from UC Berkeley!

  • @connormcgee4711
    @connormcgee4711 1 year ago +1

    10:48 I'm not familiar with this field so forgive me if I am wrong, but does anyone find the areas associated with high neuron firing interesting? For the inanimate object, it is at the beginning and end, whereas for the bat, it is at the end but also a turning point.

  • @joegriffith1683
    @joegriffith1683 1 year ago

    I wonder how an agent's social map encodes the relationships between different actors, for example the relationship between my father and my friend. Both of the dimensions 'power' and 'affinity' are inherently with respect to the agent, so relative spatial information between two different actors in this space wouldn't be able to model their relationship.
    Perhaps a space with less agent-specific dimensions would be necessary in order to model the social dynamics of all of an agent's known actors.
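    A tiny numeric illustration of that point (my own sketch; the coordinates are invented): subtracting two people's egocentric (power, affinity) coordinates only says how they differ from the agent's point of view, not what their own relationship is.
    import numpy as np
    # Hypothetical coordinates relative to "me": (power over me, affinity with me)
    father = np.array([0.8, 0.9])
    friend = np.array([0.0, 0.7])
    print(father - friend)   # [0.8 0.2] -> a difference as seen from MY map
    # Modelling father <-> friend directly would need a map centred on one of them,
    # i.e. dimensions that are not defined relative to this particular agent.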

  • @ChaoticNeutralMatt
    @ChaoticNeutralMatt 1 year ago

    I had to think about the relationship far longer than the time allowed :P

  • @keithpeterson4005
    @keithpeterson4005 1 year ago

    I wonder if the same thing happens when I do category theory...

  • @noahwilliams8996
    @noahwilliams8996 1 year ago +2

    "Bob is married to Alice" about fucking time.

    • @cyancoyote7366
      @cyancoyote7366 1 year ago +3

      At last, a happy ending after all those different methods of communication they tried; they must have found the most reliable and secure one.

  • @bettyboop5454
    @bettyboop5454 10 months ago

    Brain performance: good 🕷
    Brain performance after seeing this video: it broke the scale 🦋

  • @ToniMorton
    @ToniMorton 1 year ago

    Mates and territorial elements are most likely the reason for location cells to develop this function.

  • @omridrori3286
    @omridrori3286 1 year ago

    It is so interestingggg

    • @omridrori3286
      @omridrori3286 1 year ago

      How can I contribute to you so you will continue making these amazing videos?

  • @petevenuti7355
    @petevenuti7355 1 year ago

    What about people who can't visualize? Those who think only with words and can't even imagine a spatial pattern... Is that hippocampal damage, or is it more like internal blindsight? Something there but prevented from reaching consciousness for some reason?
    And why am I seeing such a manner of perception as damage, and can't seem to imagine it as just a different way to do things?
    If it is as common as I am told, how do people who think that way manage to navigate the world?

  • @benjaminshatokhin4725
    @benjaminshatokhin4725 1 year ago +1

    This reminds me: GPT-3 (the revolutionary chat engine that can write code, among other things) is incapable of doing basic family relations, e.g. father, uncle, etc. You can try it out if you have an account.

    • @joeystenbeck6697
      @joeystenbeck6697 1 year ago +2

      I'm not an expert, but I have a pretty good understanding of how the new language models work.
      GPT-3 and ChatGPT are trained to create sentences based on human responses. They don't really abstract or understand what's going on; they are just incredible at mimicry.
      If I understand correctly, they can generalize speech across situations (e.g. "I have a cat" => "I have a ___"), but there's still no understanding of what those words mean -- it's probabilistic, based on what data the model was given.
      So it really has no neurons for *any* "concepts"; it is just trained to respond to different inputs in different ways. That's why if you ask it to do a niche task, it flops.
      Could be wrong.
      Edit: and maybe we'll get to the point of language models creating their own abstractions soon!
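      A deliberately tiny sketch of the "probabilistic, based on the data it was given" idea (my own toy example, nothing like a real GPT): a bigram model that only counts which word followed which in its training text, with no notion of what the words mean.
      from collections import Counter, defaultdict
      corpus = "i have a cat . i have a dog . you have a cat .".split()
      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):   # count word -> next-word transitions
          counts[prev][nxt] += 1
      def next_word_probs(word):
          c = counts[word]
          total = sum(c.values())
          return {w: n / total for w, n in c.items()}
      print(next_word_probs("a"))   # {'cat': 0.666..., 'dog': 0.333...}
      # It will happily continue "i have a ..." purely from co-occurrence statistics --
      # mimicry of its corpus rather than reasoning over concepts like "uncle" or "father".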

    • @drdca8263
      @drdca8263 1 year ago

      @@joeystenbeck6697 I think the “it is probabilistic based on the information given” says/implies less than you think it does?

  • @444haluk
    @444haluk 1 year ago +1

    Everything is a map because we draw EVERYTHING like this, including logic.

    • @alex15785
      @alex15785 1 year ago

      Fully agreed! Are there any articles about this concept?

  • @XanderGouws
    @XanderGouws 1 year ago +2

    Fascinating that the structures our brain uses to encode physical space are also involved in encoding social space. You mentioned that the position of objects wasn't associated with hippocampal activity; where would that information be encoded then?
    I'm also curious why changes in absolute social distance weren't correlated with hippocampal activity. Any clues?
    Again, great video!

    • @ArtemKirsanov
      @ArtemKirsanov  1 year ago +1

      Thank you!
      Well, I didn't mention it in the video, but the position of the objects was encoded in the hippocampus as well ("object place cells", so to speak); it's just that these cells differed from the ones representing other bats. Take a look at Figure 4 in the paper (here's the full text link: www.weizmann.ac.il/brain-sciences/labs/ulanovsky/sites/neurobiology.labs.ulanovsky/files/uploads/omer_etal_science2018.pdf)
      For example, Cell 361 (left panel) clearly codes for the position of the ball, but the place field is located in a different position compared to the conspecific representation.
      I'm not sure about the social distance. It could be that fMRI, after all, is a method with poor spatial resolution --- you can't measure the responses of individual neurons. So it is still a mystery whether such "social cells" even exist in the human brain 🤔

    • @Anonymous-df8it
      @Anonymous-df8it 1 year ago

      @@ArtemKirsanov Doesn't this mean that humans only map social space in one dimension (arctan(affiliation/power))?

  • @tdoge
    @tdoge 1 year ago

    I don't quite understand what the point of R, or "social distance", is; it doesn't make sense that gaining affiliation would increase that person's social distance from you?

    • @user-hy6cp6xp9f
      @user-hy6cp6xp9f 1 year ago

      I’m curious about this too

    • @goclbert
      @goclbert 1 year ago

      Artem based his diagram on the one in Schafer and Schiller (2018), which referenced the work of Tavares et al. (2015). But when you look at Tavares, you see that affiliation is bounded above at zero. It's just a mistake in the graphic: "you" at 12:57 should be all the way at the middle right, and the axes should be on the right.
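      For concreteness, here is a tiny numeric sketch of the two quantities being debated (my own illustration with made-up coordinates, not values from Tavares et al.): the social vector's length R and the angle an earlier comment writes as arctan(affiliation/power).
      import numpy as np
      power, affiliation = 0.3, 0.8                         # hypothetical coordinates relative to "you"
      R = np.hypot(power, affiliation)                      # vector length, i.e. "social distance"
      theta = np.degrees(np.arctan2(affiliation, power))    # vector angle
      print(f"R = {R:.2f}, angle = {theta:.1f} degrees")    # R = 0.85, angle = 69.4 degrees
      # Whether gaining affiliation increases or decreases R depends on where the point
      # already sits, which seems to be exactly what this sub-thread is puzzling over.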

  • @evennot
    @evennot 1 year ago

    Fascinating.
    BTW, I was always baffled by how completely unrelated languages have similar homonyms, as if there were an underlying biological reason. Like "right" as a direction and "right" as true. Looks like it can be the case.

    • @didack1419
      @didack1419 1 year ago

      I don't think there's an underlying biological reason that represents the semantics of "right" within people's minds. We can't forget that convergent cultural evolution happens.

    • @evennot
      @evennot 1 year ago

      @@didack1419 Convergent cultural evolution is the default answer (mine too). But it's like saying "it just happened to be this way".
      So if there were another explanation, it would be nice.

    • @didack1419
      @didack1419 1 year ago

      @@evennot Well, it's not that convergent cultural evolution "just happened to be this way" any more than convergent biological evolution "just happened to be this way"; when there's convergence, there are reasons for it.

  • @Anonymous-df8it
    @Anonymous-df8it 1 year ago

    If humans only use two axes for social information, isn't that wasteful as humans can process three spatial dimensions? Also, distance from the center of the social space isn't necessarily 'closeness' (e.g., if all of the distance goes towards the power axis or towards the negative affection axis)?!
    Also, how would conditions like autism impact the perception of social spaces?

  • @EduNauta95
    @EduNauta95 1 year ago

    So judging by your video, one could say that traumatic relationships with someone are represented like some sort of immovable high-voltage knot somewhere in the hippocampus.

  • @joegriffith1683
    @joegriffith1683 1 year ago

    So apparently I was wrong with "step-husband" at 0:15.

  • @permutemusic
    @permutemusic 1 year ago

    Man, I took too many bong hits; I read that family tree upside down and came to some very different conclusions...

  • @luizz_k
    @luizz_k 9 months ago

    TF? I'm pretty sure I went back the same way we got to Alex, I didn't jump to Bob straight away.

  • @ThomasAndersonPhD
    @ThomasAndersonPhD 1 year ago +1

    The opening example isn't very good. It is trivially easy to explain the logic.
    The sibling of one's spouse is one's sibling-in-law by definition.
    Since Alex is the sibling of Bob's spouse Alice, Alex is Bob's sibling-in-law.
    Since Alex is male, sibling-in-law becomes brother-in-law.
    Very straightforward.
    It would be the same thing if you asked about a relation like nephew or daughter.
    Words have definitions. These words are defined by their relations.
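    The definitional chain above is mechanical enough to write down as a couple of rules. A minimal sketch (my own toy encoding of the video's Bob/Alice/Alex example; the dictionaries and function names are invented):
    spouse  = {"Bob": "Alice", "Alice": "Bob"}
    sibling = {"Alice": ["Alex"], "Alex": ["Alice"]}
    male    = {"Alex", "Bob"}
    def in_laws(person):
        """Siblings of one's spouse are one's siblings-in-law, by definition."""
        partner = spouse.get(person)
        return sibling.get(partner, []) if partner else []
    def relation(person, other):
        if other in in_laws(person):
            return "brother-in-law" if other in male else "sister-in-law"
        return "unknown (more rules needed)"
    print(relation("Bob", "Alex"))   # brother-in-law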

    • @ArtemKirsanov
      @ArtemKirsanov  1 year ago

      Well, yeah, exactly -- this is one example of tracking abstract relations, which is thought to be the general function of the hippocampus (for example elifesciences.org/articles/17086 ). And social information is just one kind of entry in this "relational database".
      So I don't really see any conflict there

  • @prithwirajmaity4786
    @prithwirajmaity4786 1 year ago

    What software do you animate with?

    • @ArtemKirsanov
      @ArtemKirsanov  1 year ago +2

      Adobe After Effects for the majority of animations + Blender for 3D scenes (hippocampus, bats flying)

  • @alex15785
    @alex15785 1 year ago +2

    Not only social interactions but, by all appearances, all of human logical thinking works as a pseudo-physical space. Even the relations between abstract concepts are analogous to physical arrangements: "above the law", "in love", "operations over representations". If the ability to orient oneself in this inner pseudo-space is disrupted, semantic aphasia arises and the person can no longer think through complex logical thoughts or grammatical constructions.

  • @Generouslife153
    @Generouslife153 1 year ago

    Did anyone else think that Alice and Alex might have different mothers and would legally be referred to as half-brother/half-sister-in-law?

  • @vit3060
    @vit3060 1 year ago

    Artem, do a voice-over instead. It comes across (is perceived, is understood) much better than some guy sitting in front of the camera, actively waving his arms in all directions )) Being a narrator, especially a TV narrator, takes training, and not everyone who has mastered neuroscience automatically becomes a good presenter. And I hope it goes without saying that the informativeness of the footage drops to zero whenever the "narrator" appears. You have to be able to rein in your pride and self-admiration.

  • @generaltheory
    @generaltheory 1 year ago +2

    Tangents are important. It appears as if the answer to quantum gravity lies in a polar-Cartesian transformation with space-half-space compressions and tangents for physical length-information (much of the brain) mapping. At least I've "gravitized the quantum" (of computational time) in my framework, which leads directly to the geometry of the universe, with code and visualizations (of these formulas producing our actual visual fields and anatomies). Yes, really.

  • @codyx8233
    @codyx8233 1 year ago

    'promosm' 😃

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 9 months ago

    Having a spectrum of 'conjoint' cells is clear evidence that this claimed framework, which labels cells as 'social' or 'spatial', is fundamentally missing the dynamics of a useful classification system. You could almost make up any two arbitrary labels that might classify neurons and you'd see this same division; it's evidence of a poor model.

  • @erroroftheworld6927
    @erroroftheworld6927 1 year ago

    That is all well and good, of course, but first it would help to understand how, and for what exactly, neurons can even be responsible when it comes to position in space, or to position in kinship/hierarchy for a particular individual.
    It is also interesting, in that case, whether we can assume that abstractions more complex than social kinship are built, among other things, on the abstractions of this social kinship. And if so, is it worth singling out social kinship specifically?

    • @alex15785
      @alex15785 1 year ago

      Essentially, the question is whether position neurons (with their receptive "patches") can encode a hierarchical graph.

    • @Anonymous-df8it
      @Anonymous-df8it 1 year ago

      Stop speaking Putin!

    • @Anonymous-df8it
      @Anonymous-df8it 1 year ago

      @@alex15785 Stop speaking Putin!

    • @erroroftheworld6927
      @erroroftheworld6927 1 year ago +1

      @@alex15785 For some reason it seems to me that before a social hierarchy we first need to single out a hierarchy of real-world objects in general, and only then move on to social relations.

    • @erroroftheworld6927
      @erroroftheworld6927 1 year ago +1

      @@Anonymous-df8it Stop being a xenophobic stinky dick and say something on the topic!

  • @LeadingIndicator
    @LeadingIndicator 1 year ago

    Um, Bob and Alex are lovers, but nobody talks about it. Bob is what's known as a "side-piece" (which makes Alice a "beard"). Meanwhile, George is a "groomer". Questions?

  • @pichirisu
    @pichirisu 1 year ago

    Yeah, unfortunately this isn't how I think. This is bunk science and a good example of the failures of logical empiricism/positivism dictating science and creating a scientism instead of a science or scientific method. Sucks that basic psychology is more true than something we throw millions at each year, like behavioral neuroscience.