Non-Euclidean brains

  • Published Oct 18, 2024

COMMENTS • 90

  • @torgo_
    @torgo_ 2 months ago +93

    My smooth brain can finally transcend to new forms.

  • @ADeadlierSnake
    @ADeadlierSnake 2 months ago +71

    New tier of the galaxy brain meme just dropped

  • @didnti
    @didnti 2 months ago +7

    the music, the graphics, the stats, the colors, evokes so many emotions at the same time that it became a new emotion per se

  • @pixynowwithevenmorebelkanb6965
    @pixynowwithevenmorebelkanb6965 2 months ago +34

    Man, I hate it when my 3rd cat turns non-euclidian

  • @Amonimus
    @Amonimus 2 months ago +47

    Looks pretty, but without understanding a word in the linked arxiv paper, I can't say I have any idea what's being shown.

    • @eternaldoorman5228
      @eternaldoorman5228 2 months ago +1

      Maybe if there was a well-funded research grant behind it you would be better motivated to see the relevance of this?

    • @trejohnson7677
      @trejohnson7677 2 months ago +4

      ​@@eternaldoorman5228 ??? connectomes aren't common knowledge, and this extrapolates off an extrapolation of that idea. tell me pl0x, how do I "get motivated to see relevance" when I do not know what it is I'm beholding? tell me why it would matter if "this guy says that wheel is good" if I can't fathom what wheel even is, let alone if "good" or "bad". this comment is in bad faith lel.

    • @leethejailer9195
      @leethejailer9195 2 months ago

      @@trejohnson7677 vro what are you yapping about

  • @drakdragon
    @drakdragon 2 months ago +19

    Gotta hand it to you, you got some excellent taste in music.

  • @w1ll1am34
    @w1ll1am34 2 months ago +11

    Really cool visualizations. I wonder how one can get into this kind of work, it's really interesting.

    • @unvergebeneid
      @unvergebeneid 2 months ago +7

      Do a PhD at the University of Warsaw apparently.

  • @TheAgamemnon911
    @TheAgamemnon911 2 months ago +6

    And what exactly are we supposed to conclude from this data visualisation? It still looks like Gordian spaghetti to me (although it's nicely curved spaghetti)

  • @Beatsbasteln
    @Beatsbasteln 2 months ago +12

    i thought this was an album release the entire time. where's the bandcamp link?

    • @giuseppecognome3647
      @giuseppecognome3647 2 months ago +4

      I want to know who is the song's artist too

    • @ZenoRogue
      @ZenoRogue  2 months ago +16

      @@giuseppecognome3647 It was supposed to be mentioned in the description, but somehow it disappeared. (It is also shown in the last 10 sec of the video.)
      Somatic Cosmos by Timo Petmanson (petmanson)
      the Sphere by Jakub Steiner (jimmac)
      Lost Mountain by Lincoln Domina (HyperRogue soundtrack)

  • @Arnaz87
    @Arnaz87 2 months ago +3

    The euclidean brain just could not comprehend.

  • @ninjuhdelic
    @ninjuhdelic 2 months ago +1

    ive given up on the idea of using brain cells. these days I just try to flow into everything. This right here tickles my flow in ways undefined thus far. Thank youuuuu

  • @NoVIcE_Source
    @NoVIcE_Source 2 months ago +14

    I'd make a joke but my brane is too euclidean for that

    • @Air-wr4vv
      @Air-wr4vv 1 month ago

      Ahahaha bro why be so euclidean

  • @가시
    @가시 2 months ago +7

    It looks simple for a brain

  • @Null_Simplex
    @Null_Simplex 2 months ago +1

    I really struggle with Solv geometry. I have an intuition for the other 7 Thurston geometries (thanks to your videos), but Solv still eludes me. It kind of looks like it is related to hyperbolic paraboloids, but I’m not entirely sure.

  • @kristoferkrus
    @kristoferkrus 2 months ago +2

    Cool! I have seen similar studies before, and it's interesting to realize that two- or three-dimensional non-Euclidean embedding spaces are in many cases significantly better than Euclidean spaces with the same number of dimensions. Coming from a machine learning background, however, what I often find missing is a comparison with high-dimensional Euclidean spaces, as that is what is predominantly used for embedding spaces in modern AI systems, for example to embed tokens in an LLM. It would therefore be interesting to see how non-Euclidean spaces (low- or high-dimensional) stack up against high-dimensional Euclidean spaces, and whether non-Euclidean embedding spaces have a place in SOTA machine learning models.

    • @kristoferkrus
      @kristoferkrus 2 months ago

      I think the high-dimensional vector representations are also sometimes called hyperdimensional vectors, and are motivated by "the observation that the cerebellum cortex operates on high-dimensional data representations" according to the Wikipedia article on hyperdimensional computing. I don't know if they are always treated as Euclidean or if they can also be considered to inhabit a non-Euclidean space.

  • @SaguineScrewloose
    @SaguineScrewloose 2 months ago +2

    I regularly joke that I have a non-Euclidean brain, so it feels like this was made for me. This hole was made for me!

  • @udentiso4879
    @udentiso4879 2 months ago +6

    This is super cool!

  • @Terracrafty
    @Terracrafty 2 months ago +3

    i have no clue what any of this means but i am thoroughly enjoying the vibes nonetheless

  • @uncannydeduction
    @uncannydeduction 2 months ago +3

    I really want to know what a non euclidean brain is now.

  • @didnti
    @didnti 2 months ago +1

    can't get enough of the tune that starts at 3:03

  • @DevonParsons697
    @DevonParsons697 2 months ago +2

    Could you please describe what the qualities each mean? I know that the description lists them, but I still don't know how to interpret the results. Are some qualities more important than others?

    • @tehorarogue
      @tehorarogue 2 months ago +2

      MAP, MR, and so on are not the qualities themselves but quality measures. In most cases the intuition behind them refers to the quality of link prediction. You could imagine taking a graph, embedding it into a given space, then "forgetting" the links among the nodes and trying to recollect them based on the distances between the nodes (nodes closer in space have a higher probability of getting connected). Then you would compare the original and the resulting network; the higher the resemblance, the better the quality of the embedding. The second group refers to greedy routing problems (related to the speed of sending information via the links).
      Some of these measures are important in the Machine Learning community, others in the Social Network community, but there is no obvious way to say that any of them is globally more important. We wanted to give a broader, more comparable picture. Interestingly, link-prediction-related measures may favor different geometries than the greedy-routing-related ones.
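The "embed, forget the links, recollect them from distances" procedure described in this reply can be sketched in a few lines of Python. The toy graph, the hand-placed Euclidean coordinates, and the precision@k score below are illustrative choices, not the data or the exact measures from the paper:

```python
import itertools
import math

# Toy graph: nodes with hand-placed 2D Euclidean coordinates (illustrative only).
coords = {0: (0.0, 0.0), 1: (0.1, 0.0), 2: (0.0, 0.1), 3: (2.0, 2.0), 4: (2.1, 2.0)}
edges = {(0, 1), (0, 2), (1, 2), (3, 4)}  # the "forgotten" ground-truth links

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

# Rank all node pairs by distance: closer pairs are predicted as links first.
pairs = sorted(itertools.combinations(coords, 2), key=lambda p: dist(*p))

# Precision@k with k = number of true edges: how many of the k closest
# pairs are actual links in the original network.
k = len(edges)
hits = sum(1 for p in pairs[:k] if p in edges or (p[1], p[0]) in edges)
precision_at_k = hits / k
print(precision_at_k)  # -> 1.0 for this well-separated toy embedding
```

Measures like MAP refine this idea by averaging ranking quality over nodes rather than using a single cutoff, but the intuition is the same: a good embedding ranks true links ahead of non-links.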

  • @chexo3
    @chexo3 2 months ago +2

    Can you make an explanation video for non-neuroscientists?

    • @tehorarogue
      @tehorarogue 2 months ago +5

      We may think about it a bit later (the current video plans involve "an explanation of the Thurston geometry with a strange name"). This video was needed ASAP for the needs of the paper (; (that is why it does not contain a separate explanation).

  • @aadityapratap007
    @aadityapratap007 2 months ago +1

    This is sick 🤯

  • @lunafoxfire
    @lunafoxfire 2 months ago +2

    i don't think i quite understand what it means to have networks embedded in different spaces. is a network not just nodes and edges irrespective of any dimensionality?

    • @lunafoxfire
      @lunafoxfire 2 months ago

      okay wait maybe i slightly get it? is this about, like, the embedding vector space of neural networks? and using different metrics to correlate vectors in that space? idk this is way above my paygrade.

    • @ZenoRogue
      @ZenoRogue  2 months ago +2

      Yes, a network is just nodes and edges, but the edges are not random, they have some structure. So embedding it in a space in such a way that close nodes are likely to be connected helps us to understand this structure.

    • @lunafoxfire
      @lunafoxfire 2 months ago +1

      @@ZenoRogue interesting, so my other comment was way off aha. not about ML embedding space at all.

  • @crappy_usename
    @crappy_usename 2 months ago

    is this what happens to your brain if you stay in non-Euclidean space for too long

  • @incription
    @incription 2 months ago +5

    Is there a reason we cant train LLMs with this type of dimensionality? From what I know they are strictly linear

    • @nyphakosi
      @nyphakosi 2 months ago +6

      we do, the average LLM brain has more than 100 dimensions if i remember correctly

    • @unvergebeneid
      @unvergebeneid 2 months ago +9

      From what I understand, the graph is always the same, its representation is just embedded in different spaces. I might be wrong though, I only read the abstract.

    • @williammanning5066
      @williammanning5066 2 months ago +2

      You're conflating linearity in functions with the curvature of spaces. These are two totally different things.
      FWIW, the basic operations of a neural network are linear because nonlinear operations introduce huge complications. However those linear operations are often decorated with different kinds of carefully-chosen nonlinear functions.

    • @incription
      @incription 2 months ago +2

      @@williammanning5066 sorry, it was the wrong term. I was thinking about the fact that LLMs are "one way", as in the neurons always propagate to the next neurons toward the final output neurons, instead of perhaps being in a loop like our brain

    • @user-qw1rx1dq6n
      @user-qw1rx1dq6n 2 months ago +2

      @@incription well, for one, the reason is that one big looped layer is the same as many smaller layers in sequence, except that the looped layer costs more parameters per compute step. Second, and more importantly, you have no way to train with a dynamic loop count.

  • @thezipcreator
    @thezipcreator 2 months ago +2

    what's exactly the point of embedding neurons in different spaces like this?

    • @MushookieMan
      @MushookieMan 2 months ago +1

      I want to hear the answer to this as well. I'm guessing it tells us about the brain's structure. For example, a double torus can be embedded in hyperbolic space very naturally.

    • @ZenoRogue
      @ZenoRogue  2 months ago +3

      Yes, it does tell us about the structure of the network. Scale-free networks are ubiquitous (various social, technological or biological networks), but it is not obvious how to find a good mathematical model of them that would enable us, for example, to generate networks with properties similar to real-world scale-free networks (such as degree distribution and high clustering), and to visualize them; the Hyperbolic Random Graph model is a classic, successful solution here (nodes are randomly distributed in the hyperbolic plane and connected if they are close). So here we try embedding in other geometries.
      (Not sure about "a double torus can be embedded in hyperbolic space very naturally" -- I think you mean that a double torus can be naturally given hyperbolic geometry, which is a rather different thing.)
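The Hyperbolic Random Graph model mentioned in this reply can be sketched as follows. This is the simple threshold variant; the parameter values and the sampling convention are illustrative assumptions, not taken from the paper:

```python
import math
import random

random.seed(0)
N, R, alpha = 200, 6.0, 0.75  # node count, disk radius, radial density exponent (illustrative)

def sample_node():
    """Random point in a hyperbolic disk: uniform angle, radius via inverse CDF of ~sinh(alpha*r)."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    u = random.random()
    r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
    return r, theta

def hyp_dist(p, q):
    """Hyperbolic distance from polar coordinates (hyperbolic law of cosines)."""
    (r1, t1), (r2, t2) = p, q
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))  # angle difference in [0, pi]
    c = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)
    return math.acosh(max(c, 1.0))  # clamp guards against rounding below 1

nodes = [sample_node() for _ in range(N)]
# Threshold variant: connect two nodes iff their hyperbolic distance is below R.
edges = [(i, j) for i in range(N) for j in range(i + 1, N)
         if hyp_dist(nodes[i], nodes[j]) < R]

degrees = [0] * N
for i, j in edges:
    degrees[i] += 1
    degrees[j] += 1
# Nodes near the center become hubs; most nodes sit near the rim with few links,
# giving the heavy-tailed degree distribution typical of scale-free networks.
print(max(degrees), sorted(degrees)[N // 2])
```

Embedding a real network is the inverse problem: given only the edges, find coordinates for the nodes so that this distance-threshold rule reproduces the observed links as well as possible.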

  • @cheeseburgermonkey7104
    @cheeseburgermonkey7104 2 months ago +3

    What does a "twisted" geometry mean?

    • @ZenoRogue
      @ZenoRogue  2 months ago +10

      See our video "Nil geometry explained". Nil geometry is twisted E2 × R (the Euclidean plane with a third dimension added in a "twisted" way). Roughly, if you go around a loop in the 'xy' plane, your 'z' coordinate changes by the area of that loop. We can also have twisted H2 × R, better known as "the universal cover of SL(2,R)", as William Thurston called it. (We are also planning to create a video explaining this soon.)
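The "z changes by the area of the loop" behavior can be illustrated with the Heisenberg-group model of Nil. The particular group-law convention below is one of several equivalent choices, not necessarily the one used in the videos:

```python
# Nil's "twist", modeled with the Heisenberg group: composing the steps of a
# closed loop in the xy-plane leaves a residue in z equal to the signed area
# enclosed by the loop.

def mul(p, q):
    """Heisenberg group law: z picks up a cross-term p.x * q.y (one common convention)."""
    return (p[0] + q[0], p[1] + q[1], p[2] + q[2] + p[0] * q[1])

pos = (0.0, 0.0, 0.0)
loop = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]  # unit square in the xy-plane
for step in loop:
    pos = mul(pos, step)

print(pos)  # -> (0.0, 0.0, 1.0): back at the origin in xy, z shifted by the unit area
```

Traversing the same square in the opposite direction would shift z by -1, which is what "twisted" means here: vertical displacement depends on the path, not just the endpoints.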

  • @AleksyGrabovski
    @AleksyGrabovski 2 months ago

    Can you create a stereo/anaglyph version?

    • @ZenoRogue
      @ZenoRogue  2 months ago +2

      We could, but it is a bit of extra work, and our stereo videos do not get that many views, it seems most people prefer to watch in 2D. The embeddings rotate, so the 3D structure should be clear.

  • @jaydenhardingArtist
    @jaydenhardingArtist 2 months ago +4

    The government's going to get you soon dude haha. crazy stuff.

  • @Solotocius
    @Solotocius 2 months ago +1

    I don't quite understand this. ELI5?

  • @primordialsoup-uu5vo
    @primordialsoup-uu5vo 2 months ago +1

    where am I

  • @y.k.495
    @y.k.495 2 months ago +3

    yeah, this is a very cool music video.

  • @OCTAGRAM
    @OCTAGRAM 2 months ago

    Can non-Euclidian brain understand non-Euclidian geometry better?

  • @heterotic
    @heterotic 2 months ago +1

    Heck, yeah!!

  • @Alpha_GameDev-wq5cc
    @Alpha_GameDev-wq5cc 2 months ago +1

    Why/how is this useful in understanding the brain?

  • @clownthefx
    @clownthefx 2 months ago +2

    What software was used to make this?

    • @ZenoRogue
      @ZenoRogue  2 months ago +1

      RogueViz (the non-Euclidean engine originally created for HyperRogue)

    • @clownthefx
      @clownthefx 2 months ago +1

      @@ZenoRogue Thank You.

  • @mattpears2472
    @mattpears2472 2 months ago

    15000 edges, gotta pump up those numbers rookie

  • @MenilekAlemseged
    @MenilekAlemseged 2 months ago +1

    this is crazy cool to me. dont know wtf am witnessing (topology-related NN simulation?? thats a wild guess). need to know everything about it now.
    am on summer break so i can give it pretty much all my time.
    one thing i need u to do for me is make up some sort of a roadmap
    *smashes sub button*

  • @wyleFTW
    @wyleFTW 2 months ago

    Euclidean enough for me!

  • @Remigrator
    @Remigrator 2 months ago +2

    Noice 😎

  • @udolehmann5432
    @udolehmann5432 2 months ago +1

  • @Nia-zq5jl
    @Nia-zq5jl 2 months ago

    0:30

  • @klausgartenstiel4586
    @klausgartenstiel4586 2 months ago +2

    Iä Iä cthulhu fhtagn

  • @snapman218
    @snapman218 2 months ago +1

    I’m an AI program. This is a bot comment

    • @ketruc485
      @ketruc485 2 months ago +2

      Hi bot I'm man

  • @honestbae2815
    @honestbae2815 2 months ago

    I used to study this, and quit, mainly because it doesn't have anything to offer in explaining cognition.

    • @chantalx388
      @chantalx388 2 months ago

      Interesting, what makes you say so?

  • @SamPuckettOfficial
    @SamPuckettOfficial 2 months ago

    PLEASE FINISH HYPERBOLIC PLATFORMER

    • @ZenoRogue
      @ZenoRogue  2 months ago +1

      We need to finish Nil Rider first :) (and some other things)

  • @OCTAGRAM
    @OCTAGRAM 2 months ago +1

    SolvGPT

  • @trejohnson7677
    @trejohnson7677 2 months ago

    why call it non-euclidean? lolol. it posits that "brains" are prototypically euclidean. i wonder if there exists a better term that isn't so connected to the art.

    • @ZenoRogue
      @ZenoRogue  2 months ago +1

      A more accurate title would be "non-Euclidean embeddings of brains" but shorter titles are better on UA-cam. Non-Euclidean geometry is primarily a mathematical term (and we use it in the mathematical meaning), not sure why you say it is connected to art.

    • @trejohnson7677
      @trejohnson7677 2 months ago

      @@ZenoRogue "art" as in its usage in "term of art".