PyG starts from 33:33
You are my hero. Love you
AMAZING ... all tutorials either start at a very basic level and leave you high and dry when they reach the actual point, or they start at a point you have no idea about... this tutorial is amazing
CrossEntropyLoss already applies log-softmax behind the scenes. On top of that, F.softmax is applied at the end of the model's forward, which is not needed if nn.CrossEntropyLoss is used. This is before PyTorch Geometric is introduced.
Not only is it not needed, a double softmax will break your model.
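To see why the double softmax hurts, here is a minimal pure-Python sketch (plain lists instead of tensors, so the numbers are easy to follow): a second softmax squashes the class probabilities toward uniform, which weakens the signal CrossEntropyLoss trains on.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Raw logits with a clear winning class.
logits = [4.0, 1.0, 0.0]

once = softmax(logits)    # what CrossEntropyLoss effectively sees
twice = softmax(once)     # what it sees if forward() already applied softmax

# One softmax keeps the classes well separated;
# the second squashes them toward uniform.
print(once)
print(twice)
```

With these logits, one softmax puts over 90% of the mass on the first class, while the double softmax drops it to roughly a half, so the model's confidence (and gradient signal) is largely erased.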
Really great lecture content and lecturer.
i'm unworthy of this presentation. good job.
Thank you very much. It was a very good GNN tutorial.
Really good tutorial.
Thanks for this amazing tutorial!! It was really helpful for me ☺
There is something to simplify: nn.CrossEntropyLoss combines the two steps, i.e. F.cross_entropy(x, label) = F.nll_loss(F.log_softmax(x, dim=-1), label)
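That identity can be checked numerically. A pure-Python sketch (plain lists instead of tensors; PyTorch's real ops are tensorized and batched, but the math is the same):

```python
import math

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def log_softmax(xs):
    """log_softmax(x)_i = x_i - log(sum_j exp(x_j)), stabilized by shifting by the max."""
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

def nll_loss(log_probs, label):
    """Negative log-likelihood of the true class."""
    return -log_probs[label]

logits, label = [2.0, 0.5, -1.0], 0

# Cross-entropy written out naively from probabilities...
naive = -math.log(softmax(logits)[label])
# ...equals NLL applied to log-softmax, which is what
# nn.CrossEntropyLoss computes in one (more stable) pass.
composed = nll_loss(log_softmax(logits), label)

print(naive, composed)
```

This is also why the model's forward should return raw logits when nn.CrossEntropyLoss is the criterion: the log-softmax is already inside the loss.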
Can anyone please tell me the prerequisites for starting with GNNs? I'm new to neural networks. I have some experience in ML, but neural networks are still new to me.
You can watch the Coursera Deep Learning course, or Stanford's CS229.
Nice!
Hi
Please upload more ML/DL videos from Stanford. Thanks.
Hey, do you have more coding tutorials from CS224W like these?
If anyone knows where I can find more coding playlists, please share.
What is the best GNN library for PyTorch as of now, 2021?
I have the same question.
model = pyg_nn.GAE(Encoder(dataset.num_features, channels)).to(dev)
model.split_edges(data)
--- This gets the error "'GAE' object has no attribute 'split_edges'". Just checked the documentation: it's true that the latest version of the GAE class no longer has a split_edges method. So use a random split instead?
Oh, it's negative sampling.
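For anyone hitting the same error: in recent PyG releases the edge split moved out of GAE into separate utilities (e.g. torch_geometric.transforms.RandomLinkSplit, with negatives drawn via torch_geometric.utils.negative_sampling). The core idea of negative sampling can be sketched in pure Python; negative_sampling below is a toy stand-in for illustration, not the PyG function:

```python
import random

def negative_sampling(edges, num_nodes, num_samples, seed=0):
    """Sample node pairs that are NOT edges, to serve as negative
    examples for link prediction (toy version of the idea)."""
    # Treat the graph as undirected: forbid both orientations.
    existing = set(edges) | {(v, u) for u, v in edges}
    rng = random.Random(seed)
    negatives = []
    while len(negatives) < num_samples:
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and (u, v) not in existing:
            negatives.append((u, v))
            # Avoid sampling the same pair (or its reverse) twice.
            existing.add((u, v))
            existing.add((v, u))
    return negatives

# A small path graph: 0-1-2-3-4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
negs = negative_sampling(edges, num_nodes=5, num_samples=4)
print(negs)  # non-edge pairs only
```

The autoencoder is then trained to score the true edges high and these sampled non-edges low.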
I'm wondering why that pooling layer is necessary for the graph-level task. Can't we just use some linear layers to predict a property of the whole graph? Can somebody help me with this? Ty!
I think it's due to the dimensions? For graph-level tasks we want the whole graph represented by a single vector, so the pooling transforms the node-embedding matrix into a vector.
I’m like two years late to this question, but the node-embedding matrix has a different dimension for each graph size, so we can’t train a neural network on it directly. We could train a sequential neural network, but we don’t want to get different results based on different node orderings. The most naive approach would be to just take a sum or average of all node embeddings and use that as the graph embedding. It might be enough in some cases.
Pooling is also necessary when you're dealing with sequence data, like queries.
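The sum/average idea from the reply above can be sketched in a few lines of plain Python (no tensors); mean_pool here is a hypothetical helper standing in for PyG's global_mean_pool:

```python
def mean_pool(node_embeddings):
    """Collapse a variable-size set of node embeddings into one
    fixed-size graph embedding by averaging each dimension."""
    n = len(node_embeddings)
    dim = len(node_embeddings[0])
    return [sum(h[d] for h in node_embeddings) / n for d in range(dim)]

# Two graphs of different sizes -> graph embeddings of the SAME size,
# so one downstream linear classifier can handle both.
g1 = [[1.0, 2.0], [3.0, 4.0]]                # 2 nodes, dim 2
g2 = [[1.0, 0.0], [0.0, 1.0], [2.0, 5.0]]    # 3 nodes, dim 2
print(mean_pool(g1))  # [2.0, 3.0]
print(mean_pool(g2))  # [1.0, 2.0]

# Averaging is permutation invariant: node order doesn't matter.
assert mean_pool(list(reversed(g1))) == mean_pool(g1)
```

This shows why linear layers alone aren't enough: their input size is fixed, while the number of nodes varies per graph, and pooling both fixes the size and removes the dependence on node ordering.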
👍
hey, what about other seminar tapes from the cs224w?
You can find them here:
snap.stanford.edu/class/cs224w-videos-2019/
Hey, the link is private. Is there a public one?
@@lindseyai4843 Can you tell us the username and password?
gg, good tutorial ✌
How do I download the code used in the presentation?
It is in the Colab :)
Bookmarking this.