I must admit that this is the best explanation of GAT and GNN one can find. A fantastic explanation in very simple English. The quality of the sound and video is great as well. Many thanks.
Thank you for your kind words
This is the best and most detailed explanation of Graph CNN attention I've found. Great job!
Your work has been an absolute game-changer for me! The way you break down complex concepts into understandable and actionable insights is truly commendable. Your dedication to providing in-depth tutorials and explanations has tremendously helped me grasp the intricacies of GNNs. Keep up the phenomenal work!
Thank you very much for the video. After having watched many others, I can say that yours is the best and the easiest to understand. I am very grateful to you. Regards.
Thank you! :)
This might be the best and simplest explanation of GAT one can ever find! Thanks, man.
Thank you very much! This was my introduction to GAT and helped me immediately get a good grasp of the basic concept :) I like the graphical support you provide for the explanation, it's great!
This was simply a fantastic explanation video, I really do hope this video gets more coverage than it already has. It would be fantastic if you were to explain the concept of multi-head attention in another video. You've earned yourself a subscriber +1.
Thank you, I appreciate the feedback!
Sure, I note it down :)
amazing!!! author well done!!!
This is the BEST video on GCN and GAT, really great, thank you!
A wonderful and succinct explanation with crisp visualisations about both the attention mechanism and the graph neural network. The way the learnable parameters are highlighted along with the intuition (such as a weighted adjacency matrix) and the corresponding matrix operations is very well done.
Very good explanation! Clear and crisp; even I, a beginner, feel satisfied after watching this. It should get more recognition!
Thanks
This is pretty amazing content. The way you explain the concept is pretty great and I especially like the visual style and very neat looking visuals and animations you make. Thank you!
Thank you for your kind words :)
I especially love your background pics.
Explained in terms of basic neural network terminology!! Great work 👍
very well explained, provides a very intuitive picture of the concept. Thanks a ton for this awesome lecture series!
I found it hard to follow initially but after understanding GCNN thoroughly, this video is a gem.
Thanks
Clear explanation and visualization on attention mechanism. Really helpful in studying GNN.
it was the best explanation that gave me hope for the understanding these mechanisms. Everything was so good explained and depicted, thank you!
Extremely helpful. Very well explained in concrete and abstract terms.
Your visual explanation is super great, it helps many people learn hours of material in minutes!
Please make more videos on specialized topics of GNNs!
Thanks in advance!
I will soon upload more GNN content :)
Amazingly easy to understand. Thank you.
such an easy-to-grasp explanation! such a visually nice video! amazing job!
Thanks, I appreciate it :)
This is a very great explanation covering basic GNN and the GAT. Thank you so much
Crystal-clear explanation; the best video lecture on GNNs I've ever seen.
Thank you so much for this beautiful video. I have been trying out so many videos on GNNs and GANs, but this video definitely tops them all. I finally understood the concept behind it. Keep up the good work :)
Very well explained. Thank you very much!
I'd love it if you could explain multi-head attention as well. You really have such a good grasp of this very complex subject.
Hi! Thanks!
Multi-head attention simply means that several attention mechanisms are applied at the same time. It's like cloning the regular attention.
What exactly is unclear here? :)
@@DeepFindr The math and code are hard to fully grasp. If you could break down the linear algebra with the matrix diagrams as you have done for single head attention, I think people would find that very helpful.
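For what it's worth, here is a minimal sketch (PyTorch, with illustrative sizes and variable names, not taken from the video) of what "cloning the regular attention" K times looks like for a single node and its neighbors:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal multi-head graph attention sketch for ONE node and its neighbors.
# All tensor names and sizes are illustrative assumptions.
num_heads, in_dim, out_dim = 2, 4, 8

h_i = torch.randn(in_dim)             # embedding of the center node
h_neighbors = torch.randn(3, in_dim)  # embeddings of its 3 neighbors

# One W and one attention vector per head ("cloning the regular attention")
W = [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_heads)]
a = [nn.Linear(2 * out_dim, 1, bias=False) for _ in range(num_heads)]

head_outputs = []
for k in range(num_heads):
    z_i = W[k](h_i)                                          # (out_dim,)
    z_j = W[k](h_neighbors)                                  # (3, out_dim)
    pairs = torch.cat([z_i.expand_as(z_j), z_j], dim=-1)     # (3, 2*out_dim)
    e = F.leaky_relu(a[k](pairs)).squeeze(-1)                # raw scores e_ij
    alpha = torch.softmax(e, dim=0)                          # attention coefficients
    head_outputs.append((alpha.unsqueeze(-1) * z_j).sum(0))  # weighted aggregation

# Heads are concatenated (or averaged in the final layer)
h_i_new = torch.cat(head_outputs, dim=-1)
print(h_i_new.shape)  # torch.Size([16])
```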
Great! Thank you for explaining the math and the linear algebra with the simple tables.
very helpful tutorial, clearly explained!
Just for anyone confused: according to the illustration in the summary, the weight matrix should have 5 rows instead of the 4 shown in the video.
Great video, and I admire the fact that your topics of choice are really the latest hot stuff in ML!
Great video! your explanation was amazing. Thank you!!
Thanks :)
Good explanation of the key idea. One question: what is the difference between GAT and self-attention constrained by an adjacency matrix (e.g. Softmax(Attn * Adj))? The memory used for GAT is D*N^2, which is D times the intermediate output of self-attention. The number of nodes in a graph used with GAT therefore cannot be too large because of memory size. But it seems that both implement dynamic weighting of neighborhood information constrained by an adjacency matrix.
Hi,
Did you have a look at the implementation in PyG? pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/conv/gat_conv.html#GATConv
One of the key tricks in GNNs is usually to represent the adjacency matrix in COO format. That way you have adjacency lists and not an n x n matrix.
Using functions like gather or index_select you can then do a masked selection of the local nodes.
Hope this helps :)
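To illustrate the COO idea mentioned above, a small sketch (plain PyTorch, illustrative names) of selecting the local nodes with index_select instead of building an n x n matrix:
```python
import torch

# COO ("edge list") representation: two index lists instead of a dense matrix.
edge_index = torch.tensor([[0, 0, 1, 2],    # source nodes
                           [1, 2, 0, 0]])   # target nodes

x = torch.randn(3, 4)  # 3 nodes, 4 features each

# Gather the features of the source and target node of every edge
# (the "masked selection of the local nodes" mentioned above):
x_src = torch.index_select(x, 0, edge_index[0])  # (num_edges, 4)
x_dst = torch.index_select(x, 0, edge_index[1])  # (num_edges, 4)

# Attention scores are then computed per edge, so memory grows with the
# number of edges, not with n^2.
pairs = torch.cat([x_src, x_dst], dim=-1)        # (num_edges, 8)
```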
4:00 Do you multiply the node feature matrix with the adjacency matrix before multiplying it with the learnable weight matrix?
I really salute you for this detailed video! that's very intriguing and clear! thank you again!
Thank you for sharing this clear and well-designed explanation.
best video for learning GNN thank you so much!
Thank you for the great video. I have one question, what happens if weighted graphs are used with attention GNN? Do you think adding the attention-learned edge "weights" will improve the model compared to just having the input edge weights (e.g. training a GCNN with weighted graphs)?
Hi! Yes I think so. The fact that the attention weights are learnable makes them more powerful than just static weights.
The model might still want to put more attention on a node, because there is valuable information in the node features, independent of the weight.
A real-world example of this might be the data traffic between two network nodes. If less data is sent between two nodes, you probably assign a smaller weight to the edge. Still, it could be that the information coming from one node is very important and therefore the model pays more attention to it.
Great job mate, keep it up
Very clear explanation. Thank you!
simple and informative! Thank you!
Great explanation, really appreciated.
If possible, could you please make a video explaining the loss calculation and backpropagation in GNNs?
Hi, Can you tell which tool you're using to make those amazing visualizations? All of your videos on GNNs are great btw :)
Thanks a lot! Haha I use active presenter (it's free for the basic version) but I guess there are better alternatives out there. Still experimenting :)
Very clear and helpful. Thank you so much!
most understandable explanation so far!
Very nice video. Thanks for your work~
Thanks for the video! I have a question: at 13:03, I think the 'adjacency matrix' consisting of {e_ij} could be symmetric, but after the softmax operation the 'adjacency matrix' consisting of {α_ij} should not be symmetric anymore. Is that right?
Yes, usually the attention weights do not have to be symmetric: the softmax normalizes over each node's own neighborhood, so α_ij and α_ji are divided by different denominators. Is that what you mean? :)
@@DeepFindr Yes. Thanks for your reply!
Very Helpful Explanation! Thank you!
Thanks for the great explanation! Just one thing that I do not really understand: may I ask how you get the size of the learnable weight matrix [4, 8]? I understood that there are 4 rows due to the number of features for each node, but I'm not sure where the 8 columns come from.
I think 8 is the arbitrarily chosen dimensionality of the embedding space.
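If it helps, a tiny sketch (assuming PyTorch) showing that 8 is just the chosen output size of the dense layer:
```python
import torch
import torch.nn as nn

# 4 input features per node; 8 is the freely chosen embedding size.
W = nn.Linear(in_features=4, out_features=8, bias=False)

x = torch.randn(5, 4)   # 5 nodes with 4 features each
z = W(x)                # node embeddings
print(z.shape)          # torch.Size([5, 8])
```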
Thanks for the best explanation.
Awesome video! Quick question: do you have a video explaining Cluster-GCN? And if yes, do you know whether a similar clustering idea can be applied to other networks (like GAT) to be able to train the model on large graphs? Thanks!
Simply exceptional!
I need more Graph Neural Network related video!!
There will be some more in the future. Anything in particular you are interested in? :)
Thank you, bro. My confused head now gets the idea behind GNNs.
Hehe
Easy and the best explanation.
nice work
Thanks for sharing the knowledge!
You're welcome :)
I learned so much from this video! Thanks a lot
That's great :)
Perfect video to understand GATs. However, I guess you forgot to add the sigmoid function when you demonstrate h1' as a sum of products of the hi* and the attention values, in the last seconds of the video (13:51).
At 11:30, should the denominator have k instead of j?
Also, this vector w_a, is it the same vector used for all edges, there isn't a different vector to learn for each node i, right? Thank you!
Ohh yeah you are right. Should be k...
Yes, it's a shared vector, used for all edges. Thank you for catching that!
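For reference, the corrected normalization (a minimal restatement of the GAT softmax, with k running over the neighborhood of node i and a single shared weight vector w_a used for every edge):
```latex
e_{ij} = \mathrm{LeakyReLU}\!\left(\mathbf{w}_a^{\top}\,[\mathbf{W}h_i \,\|\, \mathbf{W}h_j]\right),
\qquad
\alpha_{ij} = \operatorname{softmax}_j(e_{ij})
            = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})}
```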
thank you. what if you also wanted to have edge features?
Hi, I have a video on how to use edge features in GNNs :)
Thx for the awesome explanation!
A video with attention in CNN e.g. UNet would be great :)
I briefly touch on that in my video on diffusion models. I've noted it down for the future though.
Excellent job, mate 👍👍
Thx :)
Wonderful explanation! Thanks
Fantastic explanation.
Thank you so much for this great video.
A great explanation, many thanks
Great walkthrough.
Hi! Are what you explain in the "Basics" and the message-passing concept the same things?
Yes, they are the same thing :) passing messages is in the end nothing else but multiplying with the adjacency matrix. It's just a common term to better illustrate how the information is shared :)
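As a tiny illustration of that statement, a numpy sketch of one message-passing step written as plain matrix products (adding self-loops is my own assumption here, so that each node also keeps its own features):
```python
import numpy as np

# One message-passing step as matrix products.
# A: adjacency matrix, H: node feature matrix, W: learnable weight matrix.
# Sizes are illustrative.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
A_hat = A + np.eye(3)            # add self-loops (assumption for this sketch)

H = np.random.rand(3, 4)         # 3 nodes, 4 features
W = np.random.rand(4, 8)         # maps 4 features to 8 embedding dims

H_new = A_hat @ H @ W            # each node sums its neighbors' transformed features
print(H_new.shape)               # (3, 8)
```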
Great video, thank you!
Very nice, thanks for effort!
I have come to understand attention as key, query, value multiplication/addition. Do you know why this wasn't used and if it's appropriate to call it attention?
Hi,
Query / Key / Value are just a design choice of the Transformer model; attention itself is a more general technique.
There is also a GNN Transformer (look for Graphormer) that follows the query/key/value pattern. The attention mechanism is detached from this concept and is simply a way to learn importance between embeddings.
Thank you for the great video! I wanted to ask: how is training of this network performed when the instances (input graphs) have varying numbers of nodes and/or different adjacency matrices? It seems that W would not depend on the number of nodes (as its shape is 4 node features x 8 node embeddings), but the shape of the attention weight matrix Wa would (as its shape is proportional to the number of edges connecting node 1 with its neighbors).
Hi! The attention weight matrix always has the same shape. The input size is twice the node embedding size because it always takes a pair of neighbors and predicts the attention coefficient for them. Of course, if you have more connected nodes, you will have more of these pairs, but you can think of it as the batch dimension increasing, not the input dimension.
Say you have node embeddings of size 3. Then the input for the fully connected network is, for instance, [0.5, 1, 1, 0.6, 2, 1], i.e. the concatenated node embeddings of two neighbors (size = 3+3). It doesn't matter how many of these you feed into the attention weight matrix.
If you have 3 neighbors for a node it would look like this:
[0.5, 1, 1, 0.6, 2, 1]
[0.5, 1, 1, 0.7, 3, 2]
[0.5, 1, 1, 0.8, 4, 3]
The output are then 3 attention coefficients for each of the neighbors.
Hope this makes sense :)
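A short sketch of this point (PyTorch, reusing the illustrative numbers above): more neighbors only enlarge the batch dimension of the attention network, never its input size.
```python
import torch
import torch.nn as nn

emb_size = 3
attn = nn.Linear(2 * emb_size, 1, bias=False)  # the shared attention weight vector

# Node 1's embedding concatenated with each of its 3 neighbors (values from above):
pairs = torch.tensor([[0.5, 1, 1, 0.6, 2, 1],
                      [0.5, 1, 1, 0.7, 3, 2],
                      [0.5, 1, 1, 0.8, 4, 3]])

e = attn(pairs).squeeze(-1)        # one raw score per neighbor, shape (3,)
alpha = torch.softmax(e, dim=0)    # 3 attention coefficients

# With 10 neighbors, `pairs` would be (10, 6): only the batch dimension changes.
```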
@@DeepFindr If graph sizes differ, say graph_1 has 2200 nodes (resulting in a 2200x2200 adjacency matrix) and graph_2 has 3000 nodes (a 3000x3000 adjacency matrix), you can zero-pad graph_1 to 3000. This way you'll have a fixed input size for graph_1 and graph_2. Zero padding creates dummy nodes with no connections, so the sum over the neighboring nodes will be 0. And with dummy features for the dummy nodes, you end up with fixed-size graphs.
Hi, yes that's true! But for the attention mechanism used here no fixed graph size is required. It also works for a different number of nodes.
But yes padding is a good idea to get the same shapes :)
Great quality thank you !
Why would the attention adjacency matrix be symmetrical? If the weight vector is learnable, then the order in which the two input vectors are concatenated matters. It doesn't seem like there would be any reason to enforce symmetry.
Excellent explanation 👌 👏🏾
Thanks a lot for the excellent tutorial. Just a quick question: when training the single-layer attention network, what are the labels of the input? How is this single-layer network trained?
Thanks!
Typically you train it with your custom problem. So the embeddings will be specific to your use-case. For example if you want to classify molecules, then the loss of this classification problem is used to optimize the layer. The labels are then the classes.
It is however also possible to train universal embeddings. This can be done by using a distance metric such as cosine distance. The idea is that similar inputs should lead to similar embeddings and the labels would then be the distance between graphs.
With both options the weights in the attention layer can be optimized.
It is also possible to train GNNs in an unsupervised fashion, there exist different approaches in the literature.
Hope this answers the question :)
@@DeepFindr Thanks! Sorry, my question might be confusing. For the node classification task, if we use the distance metric between nodes as labels to train the weights of the attention layer, then I think the attention layer that computes the attention coefficients is not needed, because we can get the importance by computing the distance metric directly. I wonder how we can train the weights of the shared attention mechanism. Thanks again!
Yes, you are right. The attention mechanism using the dot product will also lead to similar embeddings for nodes that share the same neighborhood.
However the difference is that the attention mechanism is local - it only calculates the attention coefficient for the neighboring nodes.
Using the distance as targets can however be applied to all nodes in the input graph.
But I agree, the various GNN layers might be differently useful depending on the application.
Got it! Thanks again!
Thanks a lot. Your videos are really helpful. I have a few questions regarding the case of weighted graphs. Would attention still be useful if the edges are weighted? If so, how do I pass edge weights to the attention network? Can you suggest a paper doing that?
The GAT layer of PyG supports edge features but no edge weights. Therefore I would simply treat the weights as one dimensional edge features.
The attention then additionally considers these weights.
Probably the learned attention weights and the edge weights are sort of correlated, but I think it won't harm to include them for the attention calculation. Maybe the attention mechanism can learn even better scores for the aggregation :) I would just give it a try and see what happens. For example compare RGCN + edge weights with GAT + edge features.
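A rough sketch of that suggestion; I believe PyG's GATConv exposes an edge_dim argument and an edge_attr input, but please treat the exact signature as an assumption and check the docs:
```python
import torch
from torch_geometric.nn import GATConv

# Treat each edge weight as a 1-dimensional edge feature and hand it to GAT.
x = torch.randn(4, 16)                                   # 4 nodes, 16 features
edge_index = torch.tensor([[0, 1, 2, 3],                 # COO connectivity
                           [1, 0, 3, 2]])
edge_weight = torch.rand(edge_index.size(1), 1)          # weights as 1-dim features

conv = GATConv(in_channels=16, out_channels=32, heads=1, edge_dim=1)
out = conv(x, edge_index, edge_attr=edge_weight)         # attention sees the weights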
@@DeepFindr thanks a lot for the reply.
Thank you for wonderful content
Outstanding explanation
Very understandable! Thank you.
Can you share your presentation?
Sure! Can you send me an email to deepfindr@gmail.com and I'll attach it :) thx
@@DeepFindr Hey I have also sent you an email, could you please attach the presentation?
This is very helpful!
Love your work and thick accent, thank you! These attention coefficients look very similar to weighted edges to me, so I want to ask a question: if my graph is an unweighted attributed graph, would GATConv produce a different output compared with GCNConv by Kipf and Welling?
hahah, thanks!
I'm not sure if I understood the question correctly. If you have an unweighted graph, GAT will still learn the attention coefficients (which can be seen as edge weights) based on the embeddings. They can be seen as "learnable" edge weights.
So I'm pretty sure that GATConv and GCNConv will produce different outputs.
From my experience, using the attention mechanism, the output embeddings are better than using plain GCN.
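If you want to check this yourself, a minimal comparison sketch (PyG, illustrative sizes; not an exact reproduction of the video's setup):
```python
import torch
from torch_geometric.nn import GATConv, GCNConv

# Quick way to see that the two layers behave differently on the same
# unweighted graph (layer arguments are illustrative).
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 3],
                           [1, 0, 2, 1, 4]])

gcn = GCNConv(8, 16)
gat = GATConv(8, 16, heads=1)

out_gcn = gcn(x, edge_index)   # fixed, degree-based neighbor weighting
out_gat = gat(x, edge_index)   # learned attention coefficients per edge
```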
A Great explanation
How is the learnable weight matrix formed? Do you have some material to understand it better?
This simply comes from dense (fully connected layers). There are lots of resources, for example here: analyticsindiamag.com/a-complete-understanding-of-dense-layers-in-neural-networks/#:~:text=The%20dense%20layer's%20neuron%20in,vector%20of%20the%20dense%20layer.
2:55 Looks like it should be sum(H * W), not sum(W * H); 5x4 * 4x8 works. I suggest you provide errata at the top of the description. Someone else has noticed an error later in the video.
Hi hope you're doing well
Is there any graph neural network architecture that receives a multivariate dataset instead of graph-structured data as input?
I'll be very thankful if you answer me, I really need it.
Thanks in advance
Hi! As the name implies, graph neural networks expect graph structured input. Please see my latest videos on how to convert a dataset to a graph. It's not that difficult :)
@@DeepFindr thanks for prompt response
Sure, I'll see it right now.
Would you please send its link?
ua-cam.com/video/AQU3akndun4/v-deo.html
Great video! Thank you
Very helpful video! Thank you for your great work! Two questions: 1. Could you please explain the Laplacian matrix in GCN? The GNN explained in this video is spatial-based, and I hope to get a better understanding of the spectral-based ones. 2. How do you draw those beautiful pictures? Could you share the source files? Thanks again!
Hi!
The Laplacian is simply the degree matrix of a graph minus its adjacency matrix. Is there anything in particular you are interested in? :)
My presentations are typically a mix of PowerPoint and active presenter, so I can send you the slides. For that please send an email to deepfindr@gmail.com :)
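A tiny numpy sketch of that definition for a 3-node example graph:
```python
import numpy as np

# L = D - A: the degree matrix minus the adjacency matrix.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian used in spectral GCNs
print(L)
```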
Hello, thanks for sharing. Could you please explain how you get the learnable weight matrix: is the matrix chosen randomly or is there a method behind it, and is this equivalent to the Laplacian method?
One more question: your embedding is only on the node level, right?
Hi, the learnable weight matrix is randomly initialized and then updated through back propagation. It's just a classical fully-connected neural network layer.
Yes the embedding is on the node level :)
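A minimal sketch of that idea (PyTorch, with a placeholder loss just to show the update step):
```python
import torch
import torch.nn as nn

# The "learnable weight matrix" is just a dense layer: randomly initialized,
# then updated by backpropagation through whatever loss you train on.
layer = nn.Linear(4, 8, bias=False)            # weights start as random values
opt = torch.optim.SGD(layer.parameters(), lr=0.01)

x = torch.randn(5, 4)                          # 5 nodes, 4 features each
loss = layer(x).pow(2).mean()                  # placeholder loss for this sketch
loss.backward()
opt.step()                                     # the weight matrix gets updated
```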
I am following your playlist on GNN and this is the best content I get as of now.
I have a CSV file and want to apply GNN on it but I don't understand how to find the edge features from the CSV file
Thanks! Did you see my latest 2 videos? They show how to convert a CSV file to a graph dataset. Maybe it helps you to get started :)
@@DeepFindr Thanks, I hope I will get my answer :-)
Great explanation! As you pointed out, this is one type of attention mechanism. Can you also provide references to other attention mechanisms?
Hi! The video in the description from this other channel explains the general attention mechanism used in transformers quite well :) or do you look for other attention mechanisms in GNNs?
@@DeepFindr Yes, thanks for sharing that in the video too. I was curious about the attention mechanisms in GNNs.
OK :)
In my next video (of the current GNN series) I will also Quickly talk about Graph Transformers. There the attention coefficients are calculated with a dot product of keys and queries.
I hope to upload this video this or next week :)
Why replace dot-product attention with concatenation + projection + LeakyReLU?
That's a good point. I think the TransformerConv is the layer that uses dot-product attention. I'm also not aware of any stated reason why GAT was implemented like this. Maybe it's because it considers the direction of information (so source and target nodes) better: the dot product is commutative, so i*j is the same as j*i, and it can't distinguish between the directions of information flow. Just an idea :)
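A small sketch of that commutativity point (PyTorch, illustrative sizes): the dot product gives the same score for (i, j) and (j, i), while the concatenation + weight vector + LeakyReLU scoring does not.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two ways to score the pair (h_i, h_j); sizes and names are illustrative.
d = 8
h_i, h_j = torch.randn(d), torch.randn(d)

# Transformer-style: dot product, symmetric in i and j
score_dot_ij = torch.dot(h_i, h_j)
score_dot_ji = torch.dot(h_j, h_i)                      # identical to score_dot_ij

# GAT-style: concatenation + weight vector + LeakyReLU, order-sensitive
a = nn.Linear(2 * d, 1, bias=False)
score_gat_ij = F.leaky_relu(a(torch.cat([h_i, h_j])))
score_gat_ji = F.leaky_relu(a(torch.cat([h_j, h_i])))   # generally different
```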
Thanks for your awesome explanation, it's very clear and enlightening. But I have a question about the self-attention mechanism in this paper, since it seems not very similar to the method in NLP. In NLP, the most common form of self-attention does three linear transforms, which need 3 weight matrices `W_q`, `W_k` and `W_v`. It then uses the results derived from `W_q` and `W_k` to get `a_ij`, the attention weight between token i and token j in a sentence. In this paper, it first uses `W`, `a` and `two node embeddings` to compute `alpha_ij` for each node pair. Then it uses `W`, `alpha` and `all node embeddings` to get the `new node embedding`.
Is my understanding correct? And I'm curious why the paper doesn't use different `W` matrices in the two steps. For example, we could use 2 weight matrices `W1` and `W2`, where `W1` is used to get `alpha_ij` and `W2` is used to calculate the `new node embedding`.
Hi, yes, you are right: in NLP everything is differentiated with queries, keys and values.
This means, for word vectors they apply different transformations depending on the context (input query, key to map against and output value multiplied with attention).
In the GAT paper all node vectors are transformed with only one matrix W.
So there is no differentiation between q, k and v.
Additionally however, the attention coefficients are calculated with a weight vector, which is not done in the transformers model (there it's the dot product).
So I would say GAT uses just another flavor of attention and we cannot compare them directly - the idea is the same but the implementation slightly different.
I don't know if I understood you correctly, but W is only applied once to transform all nodes. Then there is a second weight vector to calculate a_ij.
Also, there are many variants of GNNs; some do the same separation as it's done in NLP.
For example if you have no self loops, you usually apply a different matrix for a specific node W_1 and for its neighbors W_2 - we can see this like q and k above.
Hope that helps! If not, let me know!
@@DeepFindr Yes, I think I have figured it out. Thank you very much for your detail and clear reply.
how do you think it will behave with complete graphs only ?
Well, it will simply calculate attention weights with all neighbor nodes. So every node attends to all other nodes. It's a bit like the Transformer that attends to all words.
This paper might also be interesting:
arxiv.org/abs/2105.14491
Is the weight vector dependent on the number of nodes in the graph? If I have a large graph, will I get a weight vector with a bigger dimension?
No the weight vector has a fixed size. It is applied to each node feature vector. For example if you have 5 nodes and a feature size of 10, then the weight matrix with 128 neurons could be (10, 128). If you have more nodes, just the batch dimension is bigger.
Hope this answers the question :)
@@DeepFindr thank you so much
@@DeepFindr Is the generic GNN weighting matrix the same matrix for the entire graph, or is it a different matrix for each node but applied to all the neighbours? Also, how does it deal with heterogeneous data where the input feature vector dimensions are different?
Please use brackets and multiplication signs between the matrices so I can map the mathematical formula to the visualization.
Amazing thank you 🤩
Brilliant video 👍👍👍
Amazing!
Why does the newly calculated state have more features than the original state? I don't understand.
It's because the output dimension (number of neurons) of the neural network is different from the input dimension.
You could also have less or the same number of features.