My smooth brain can finally transcend to new forms.
Your brain is only smooth in three dimensions.
made me chuckle :)
New tier of the galaxy brain meme just dropped
the music, the graphics, the stats, the colors evoke so many emotions at the same time that it becomes a new emotion per se
Man, I hate it when my 3rd cat turns non-Euclidean
Looks pretty, but without understanding a word in the linked arxiv paper, I can't say I have any idea what's being shown.
Maybe if there was a well-funded research grant behind it you would be better motivated to see the relevance of this?
@@eternaldoorman5228 ??? connectomes aren't common knowledge, and this extrapolates off an extrapolation of that idea. tell me pl0x, how do I "get motivated to see relevance" when I do not know what it is I'm beholding? tell me why it would matter if "this guy says that wheel is good" if I can't fathom what wheel even is, let alone if "good" or "bad". this comment is in bad faith lel.
@@trejohnson7677 vro what are you yapping about
Gotta hand it to you, you got some excellent taste in music.
Really cool visualizations. I wonder how one can get into this kind of work, it's really interesting.
Do a PhD at the University of Warsaw apparently.
And what exactly are we supposed to conclude from this data visualisation? It still looks like Gordian spaghetti to me (although it's nicely curved spaghetti)
i thought this was an album release the entire time. where's the bandcamp link?
I want to know who the song's artist is too
@@giuseppecognome3647 It was supposed to be mentioned in the description, but somehow it disappeared. (It is also shown in the last 10 seconds of the video.)
Somatic Cosmos by Timo Petmanson (petmanson)
the Sphere by Jakub Steiner (jimmac)
Lost Mountain by Lincoln Domina (HyperRogue soundtrack)
The euclidean brain just could not comprehend.
I've given up on the idea of using brain cells. These days I just try to flow into everything. This right here tickles my flow in ways undefined thus far. Thank youuuuu
I'd make a joke but my brane is too euclidean for that
Ahahaha bro why be so euclidean
It looks simple for a brain
I really struggle with Solv geometry. I have an intuition for the other 7 Thurston geometries (thanks to your videos), but Solv still eludes me. It kind of looks like it is related to hyperbolic paraboloids, but I’m not entirely sure.
Cool! I have seen similar studies before, and it's interesting to realize that two- or three-dimensional non-Euclidean embedding spaces are in many cases significantly better embedding spaces than Euclidean spaces with the same number of dimensions. Coming from a machine learning background, however, what I often find missing is a comparison with high-dimensional Euclidean spaces, as that is what is predominantly used for embedding spaces in modern AI systems, for example to embed tokens in an LLM. It would therefore be interesting to see how non-Euclidean spaces (low- or high-dimensional) stack up against high-dimensional Euclidean spaces, and whether non-Euclidean embedding spaces have a place in SOTA machine learning models.
I think the high-dimensional vector representations are also sometimes called hyperdimensional vectors, and are motivated by "the observation that the cerebellum cortex operates on high-dimensional data representations" according to the Wikipedia article on hyperdimensional computing. I don't know if they are always treated as Euclidean or if they can also be considered to inhabit a non-Euclidean space.
I regularly joke that I have a non-Euclidean brain, so it feels like this was made for me. This hole was made for me!
This is super cool!
i have no clue what any of this means but i am thoroughly enjoying the vibes nonetheless
I really want to know what a non euclidean brain is now.
can't get enough of the tune that starts at 3:03
Could you please describe what each of the qualities means? I know that the description lists them, but I still don't know how to interpret the results. Are some qualities more important than others?
MAP, MR, and so on are not the qualities themselves but quality measures. In most cases, the intuition behind them refers to the quality of link prediction. You could imagine taking a graph, embedding it into a given space, then "forgetting" the links among the nodes and trying to reconstruct them based on the distances between the nodes (nodes closer in space have a higher probability of being connected). Then you would compare the original and the resulting network: the higher the resemblance, the better the quality of the embedding. The second group refers to greedy routing problems (related to the speed of sending information via the links).
Some of those measures are important in the Machine Learning community, others in the Social Network community, but there is no obvious way to say that any of them is globally more important. We wanted to give a broader, more comparable picture. Interestingly, link-prediction-related measures may favor different geometries than the greedy-routing-related ones.
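The "forget the links, then recollect them from distances" procedure above can be sketched in a few lines. This is a toy illustration only: the five-node graph, its coordinates, and the use of a flat 2D Euclidean embedding are all made up for the example (the actual embeddings live in richer geometries).

```python
import itertools
import math

# Toy 5-node graph with hypothetical 2D Euclidean coordinates; flat space
# keeps the sketch short.
coords = {0: (0.0, 0.0), 1: (1.0, 0.1), 2: (0.9, 1.0),
          3: (5.0, 5.0), 4: (5.1, 4.8)}
true_edges = {(0, 1), (1, 2), (3, 4)}  # the links we "forget" and try to recover

def dist(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

# Link prediction: rank all node pairs by embedding distance (closest first),
# predict as many links as the graph really has, then measure the overlap.
pairs = sorted(itertools.combinations(coords, 2), key=lambda p: dist(*p))
predicted = set(pairs[:len(true_edges)])
precision = len(predicted & true_edges) / len(true_edges)
print(precision)  # 1.0: the three closest pairs are exactly the true links
```

Measures such as MAP refine this idea by averaging ranking precision over nodes, but the core intuition is the same: a good embedding puts linked nodes close together.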
Can you make an explanation video for non-neuroscientists?
We may think about it a bit later (the current video plans involve "an explanation of the Thurston geometry with a strange name"). This video was needed ASAP for the paper (; (that is why it does not contain a separate explanation).
This is sick 🤯
i don't think i quite understand what it means to have networks embedded in different spaces. is a network not just nodes and edges irrespective of any dimensionality?
okay wait maybe i slightly get it? is this about, like, the embedding vector space of neural networks? and using different metrics to correlate vectors in that space? idk this is way above my paygrade.
Yes, a network is just nodes and edges, but the edges are not random, they have some structure. So embedding it in a space in such a way that close nodes are likely to be connected helps us to understand this structure.
@@ZenoRogue interesting, so my other comment was way off aha. not about ML embedding space at all.
is this what happens to your brain if you stay in non-Euclidean space for too long
Is there a reason we can't train LLMs with this type of dimensionality? From what I know they are strictly linear
we do, the average LLM brain has more than 100 dimensions if i remember correctly
From what I understand, the graph is always the same, its representation is just embedded in different spaces. I might be wrong though, I only read the abstract.
You're conflating linearity in functions with the curvature of spaces. These are two totally different things.
FWIW, the basic operations of a neural network are linear because nonlinear operations introduce huge complications. However those linear operations are often decorated with different kinds of carefully-chosen nonlinear functions.
@@williammanning5066 sorry, it was the wrong term; I was thinking about the fact that LLMs are "one way", as in the neurons always propagate to the next neuron toward the final output neurons, instead of perhaps being in a loop like our brain
@@incription well, the reason is, for one, that one big layer looped is the same as many smaller layers in sequence, just that the looped layer costs more parameters per compute step. Second, and more importantly, you have no way to train with a dynamic loop count.
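The "one layer looped vs. layers in sequence" point can be made concrete: unrolling a weight-tied loop gives a stack of identical layers. The tiny 2×2 "layer" below is hypothetical, just to show the equivalence.

```python
# Hypothetical 2x2 "layer": matrix-vector product followed by ReLU.
W = [[0.5, -0.3],
     [0.2,  0.8]]

def layer(v):
    pre = [sum(w * x for w, x in zip(row, v)) for row in W]
    return [max(0.0, x) for x in pre]  # ReLU nonlinearity

x = [1.0, 2.0]

# One layer "looped" three times...
h = x
for _ in range(3):
    h = layer(h)

# ...performs the same computation as three stacked copies of that layer;
# the difference is that distinct layers could each have their own weights.
h2 = layer(layer(layer(x)))
print(h == h2)  # True
```

A fixed loop count unrolls trivially like this; a loop count that depends on the input is what standard backpropagation setups cannot easily handle.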
what exactly is the point of embedding neurons in different spaces like this?
I want to hear the answer to this as well. I'm guessing it tells us about the brain's structure. For example, a double torus can be embedded in hyperbolic space very naturally.
Yes, it does tell about the structure of the network. Scale-free networks are ubiquitous (various social, technological or biological networks) but it is not obvious how to find a good mathematical model of them that would enable us, for example, to generate networks with properties similar to real-world scale-free networks (such as degree distribution and high clustering), and to visualize them; the Hyperbolic Random Graph model is a classic, successful solution here (nodes are randomly distributed in the hyperbolic plane and connected if they are close). So here we try embedding in other geometries.
(Not sure about "a double torus can be embedded in hyperbolic space very naturally" -- I think you mean that a double torus can be naturally given hyperbolic geometry, that is a rather different thing.)
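For readers curious what the Hyperbolic Random Graph model mentioned above looks like in code, here is a minimal sketch of its simplest ("step") variant: sample nodes quasi-uniformly in a hyperbolic disk of radius R and connect every pair at hyperbolic distance below R. All parameter values are illustrative, not taken from the paper.

```python
import math
import random

random.seed(0)
R = 6.0   # disk radius, also used as the connection threshold (illustrative)
n = 50    # number of nodes (illustrative)

def sample_radius():
    # Inverse-CDF sampling with density proportional to sinh(r) on [0, R],
    # the natural "uniform" measure on the hyperbolic disk.
    u = random.random()
    return math.acosh(1 + u * (math.cosh(R) - 1))

nodes = [(sample_radius(), random.uniform(0, 2 * math.pi)) for _ in range(n)]

def hyp_dist(a, b):
    # Hyperbolic law of cosines for points in polar coordinates.
    (r1, t1), (r2, t2) = a, b
    c = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(t1 - t2))
    return math.acosh(max(1.0, c))  # clamp guards against rounding below 1

# Connect every pair at distance below R; nodes near the center become hubs,
# producing the heavy-tailed degree distribution of scale-free networks.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if hyp_dist(nodes[i], nodes[j]) < R]
print(len(edges))
```

Embedding is the inverse problem: given only the edges of a real network, recover node positions for which this generative model would likely have produced them.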
What does a "twisted" geometry mean?
See our video "Nil geometry explained". Nil geometry is twisted E2 × R (the Euclidean plane with a third dimension added in a "twisted" way). Roughly, if you go around a loop in the 'xy' plane, your 'z' coordinate changes by the area of that loop. We can also have twisted H2 × R, better known as "the universal cover of SL(2,R)", as William Thurston called it. (We are also planning to create a video explaining this soon.)
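The "z changes by the area of the loop" rule can be checked numerically: in a standard model of Nil, a step (dx, dy) taken at (x, y) shifts z by roughly (x·dy − y·dx)/2, and summing this around a closed polygon is exactly the shoelace formula for its signed area. A rough sketch (not code from the video):

```python
# Walk around a closed loop in the xy plane; each step from (x1, y1) to
# (x2, y2) contributes (x1*dy - y1*dx) / 2 to the z coordinate. Summed over
# a closed polygon this is the shoelace formula for its signed area.
def nil_z_shift(loop):
    z = 0.0
    for (x1, y1), (x2, y2) in zip(loop, loop[1:] + loop[:1]):
        z += (x1 * (y2 - y1) - y1 * (x2 - x1)) / 2.0
    return z

# Unit square traversed counterclockwise: the z shift equals its area.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(nil_z_shift(square))  # 1.0
```

Traversing the same loop clockwise gives −1.0, which is why going around a loop and then around its reverse returns you to your starting height.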
Can you create a stereo/anaglyph version?
We could, but it is a bit of extra work, and our stereo videos do not get that many views; it seems most people prefer to watch in 2D. The embeddings rotate, so the 3D structure should be clear.
The government's going to get you soon dude haha. Crazy stuff.
I don't quite understand this. ELI5?
where am I
yeah, this is a very cool music video.
Can non-Euclidian brain understand non-Euclidian geometry better?
Heck, yeah!!
Why/how is this useful in understanding the brain?
What software was used to make this?
RogueViz (the non-Euclidean engine originally created for HyperRogue)
@@ZenoRogue Thank You.
15000 edges, gotta pump up those numbers rookie
this is crazy cool to me. don't know wtf I am witnessing (topology-related NN simulation?? that's a wild guess). need to know everything about it now.
am on summer break so i can give it pretty much all my time.
one thing i need u to do for me is make up some sort of a roadmap
*smashes sub button*
Euclidean enough for me!
Noice 😎
❤
0:30
Iä Iä cthulhu fhtagn
I’m an AI program. This is a bot comment
Hi bot I'm man
I used to study this, and quit, mainly because it doesn't have anything to offer in explaining cognition.
Interesting, what makes you say so?
PLEASE FINISH HYPERBOLIC PLATFORMER
We need to finish Nil Rider first :) (and some other things)
SolvGPT
why call it non-euclidean? lolol. it posits that "brains" are prototypically euclidean. i wonder if there exists a better term that isn't so connected to the art.
A more accurate title would be "non-Euclidean embeddings of brains" but shorter titles are better on YouTube. Non-Euclidean geometry is primarily a mathematical term (and we use it in the mathematical meaning); not sure why you say it is connected to art.
@@ZenoRogue "art" as in its usage in "term of art".