Theoretical Foundations of Graph Neural Networks

  • Published 26 Dec 2024

COMMENTS • 59

  • @LewisKnox-b4f · 18 days ago · +3

    Incredible clarity here despite the challenging material. There's no other way to say it: Petar is goated.

  • @kristofhorvath3812 · 3 years ago · +5

    This is one of the cleanest, most sophisticated and organized scientific speeches I have ever heard...

  • @leodu561 · 3 years ago · +60

    Petar's talks are great as always! (I remember attending one of his talks while at Google lol.)
    Timestamps for those looking to rewatch specific sections :)
    0:00 - Introduction by Pietro Lio
    1:10 - Overview
    1:56 - 1. Fantastic GNNs in the Wild
    6:52 - 2. Talk Roadmap
    9:00 - 3. Towards GNNs from first principles
    10:34 - 4. Permutation invariance and equivariance
    15:42 - 5. Learning on Graphs
    20:22 - 6. Message passing on graphs
    24:34 - 7. Perspectives on GNNs
    25:42 - 7.1 Node Embedding Techniques
    29:39 - 7.2 Natural Language Processing
    31:23 - 7.3 Spectral GNNs
    41:17 - 7.4 Probabilistic Graphical Models
    45:09 - 7.5 Graph Isomorphism Testing
    48:53 - 7.6 Geometric Deep Learning
    50:23 - 7.7 Historical Concepts
    51:15 - 7.8 Computational Chemistry
    52:22 - Acknowledgements and Q&A

  • @epiccabbage6530 · 10 days ago · +1

    Most of these concepts were already at least somewhat familiar to me, but the way he connects them holistically makes me feel like I'm learning them for the first time.

  • @vladansaracpv · 3 years ago · +9

    Beautiful presentation. Dr Velickovic is one of the best lecturers I've heard in my life. Everything he says is so clear and concise. Add his charisma on top of all that and you can understand why he attracts more and more people to study GNNs. We are so proud to have him

  • @KyleCranmer · 3 years ago · +8

    Excellent talk, Petar; so useful to have these different perspectives brought together in one consistent framing.

  • @adityamishra348 · 3 years ago · +6

    Great talk! The first 20 minutes are simply brilliant! The kind of first-principles explanation I dream of when starting any new topic :)

  • @nguyenthanhdat93 · 2 years ago · +1

    Great presentation. Thank you for sharing, Dr Velickovic.

  • @syleshgupta5957 · 1 year ago · +1

    It was very informative, and the slides are self-explanatory for anyone with a basic understanding of the math equations :) Thank you!

  • @ceevaaaaa · 3 years ago · +2

    Thank you very much! I just completed my undergrad, and I am in the process of discovering new ideas and topics to work on and learn more about. These kinds of videos really help me (esp. as a young graduate who doesn't have much of an idea about many topics but wants to discover more).

  • @pw7225 · 3 years ago · +1

    Your presentation skills have only gotten better since your Cambridge days, and they were already stellar then.

  • @coder8i · 3 years ago · +2

    Petar! This is solid work. Clear thinking and speaking.

  • @abyoussef19 · 3 years ago · +2

    We need more lectures like this! Nice lecture!

  • @love12xfuture · 2 years ago · +1

    Amazing talk! I like the way you connect concepts with their applied and historical context. It motivates me to try to make this talk make sense to a 7-year-old or a 107-year-old (:

  • @tasveerahmad5002 · 3 years ago · +1

    Very nice lecture. Good GNN resources, tools, and exposure.

  • @emmarocheteau5788 · 3 years ago · +3

    Rewatching some of this talk - it is that good!

  • @blackguardian89 · 3 years ago · +1

    Great talk! It definitely improved my understanding of GNNs. Thank you!

  • @amiltonwong · 3 years ago · +7

    Hi, Petar, thanks a lot for the talk recording. Could you also release the slides of your talk?

    • @petarvelickovic6033 · 3 years ago · +5

      They are provided in the video description now :)

    • @amiltonwong · 3 years ago

      @@petarvelickovic6033 Thanks a lot again :)

  • @bayrameda · 3 years ago · +2

    Hi Petar, it's been a nice reframing of GNNs, thanks!
    Noting that GAT can handle non-homophilic graphs suggests an analogy to me: if propagation is error smoothing, then attention makes it edge-aware smoothing (as in image processing).
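
The analogy above can be made concrete with a minimal NumPy sketch (purely illustrative; the names, shapes, and the simplified scoring function below are assumptions, not the talk's or GAT's reference implementation): plain neighbourhood averaging smooths every edge equally, while a GAT-style attention score reweights each neighbour per edge, i.e. edge-aware smoothing.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy path graph on 4 nodes, with self-loops so every node also sees itself.
    A = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
    H = rng.normal(size=(4, 8))                              # node features

    def mean_aggregate(A, H):
        """Uniform smoothing: every neighbour contributes equally."""
        return (A @ H) / A.sum(axis=1, keepdims=True)

    def attention_aggregate(A, H, a_src, a_dst):
        """Edge-aware smoothing: a simplified GAT-like score per edge
        (learned W and LeakyReLU are omitted to keep the sketch short)."""
        scores = (H @ a_src)[:, None] + (H @ a_dst)[None, :]  # e_ij for every pair
        scores = np.where(A > 0, scores, -np.inf)             # only real edges compete
        scores -= scores.max(axis=1, keepdims=True)           # numerical stability
        alpha = np.exp(scores)
        alpha /= alpha.sum(axis=1, keepdims=True)              # softmax per neighbourhood
        return alpha @ H

    a_src, a_dst = rng.normal(size=8), rng.normal(size=8)
    print(mean_aggregate(A, H)[0])                     # node 0, uniform smoothing
    print(attention_aggregate(A, H, a_src, a_dst)[0])  # node 0, edge-aware smoothing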

  • @alexmorehead6723 · 3 years ago · +1

    Thank you for the great talk, Petar!

  • @kaanyolsever1149 · 3 years ago · +1

    Great presentation, very entertaining and informative. Thanks, Petar!

  • @daoudpiracha9891 · 3 years ago · +1

    Thank you Petar for this talk!

  • @ryderbrooks1783 · 2 years ago

    This gentleman is very good at this.

  • @GorsanMestiri · 3 years ago · +1

    Excellent talk, thank you so much.
    I'd be more than happy if you could share the best resources for diving into GNNs applied to combinatorial optimisation problems. 🙏

    • @petarvelickovic6033 · 3 years ago · +2

      Thank you for the kind words!
      As a matter of fact, we've very recently put out a survey on GNNs for combinatorial tasks:
      arxiv.org/abs/2102.09544

    • @GorsanMestiri · 3 years ago · +1

      @@petarvelickovic6033 This is amazing. Thanks

  • @calebparks8318 · 2 years ago · +1

    This was a great talk. Thank you!

  • @saeidsas2113 · 1 month ago

    Thank you very much for the presentation; it was very insightful. I have a minor question. As you pointed out, the situation is a bit different for tasks with continuous node features. I wonder whether some care needs to be taken when fitting a graph representation into a convolutional neural network? Thank you for your consideration.

  • @NelsonOtuma · 2 years ago

    This is an interesting area of study; it would be great if you could drop a link to PDF books in the comments.

  • @sacramentofwilderness6656 · 3 years ago · +1

    Thanks for a great talk! Very interesting and inspiring! I wonder, is there any research on graph networks in some limit amenable to analytical treatment, like the NTK regime? Are there any special properties of the loss landscapes that are not present in more common fully-connected NNs or usual CNNs?

    • @petarvelickovic6033 · 3 years ago

      Thank you for the kind words!
      While it may not fully align with what you're after, I think you might find the recent paper from Xu et al. on (G)NN extrapolation very interesting: arxiv.org/abs/2009.11848
      Herein, the authors make several (nicely visualised) geometric arguments about the properties of GNNs when extrapolating. The main tool and setting for their analysis is, indeed, the NTK regime.

  • @JorGe-eu3wi · 3 years ago · +1

    Excellent presentation! A question came up though: which flavour of GNN layer could we say GraphSAGE uses for its embedding algorithm? Could the learned weight matrices W be considered fixed-weight inputs of the convolutional GNN layer?

    • @petarvelickovic6033 · 3 years ago · +3

      An excellent question -- thanks for asking!
      This would depend on which type of GraphSAGE we're looking at :)
      GraphSAGE-mean, GraphSAGE-GCN and GraphSAGE-pool are all conv-GNNs: they transform every node in isolation, then use a permutation-invariant aggregator, and do not take the receiver node into account at all. The matrix W is just part of one of the two functions (psi or phi), depending on whether it's applied to individual neighbours or to aggregated vectors.
      On the other hand, GraphSAGE-LSTM is not permutation equivariant, and hence does not fit any of the three flavours. It is possible to 'massage' the LSTM aggregator to make it fit, however; see Janossy Pooling (Murphy et al.).
      Lastly, I'd note that GraphSAGE's main contribution is its scalability to inductive learning on very large graphs (as per its title ;) ) through neighbourhood sampling. Many of the embedding algorithms it proposes are not unlike models previously proposed in the literature (e.g. the GCN of Kipf & Welling).

    • @JorGe-eu3wi · 3 years ago

      @@petarvelickovic6033 thank you very much for the answer :D
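
To illustrate the conv-GNN reading of GraphSAGE described in the exchange above, here is a minimal NumPy sketch (a sketch assuming the mean aggregator with a concat-then-project update; names and shapes are illustrative, not the paper's reference code): neighbours are combined with a permutation-invariant mean, concatenated with the node's own features, and projected by a shared weight matrix W.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy undirected graph (no self-loops) with 5 nodes and 8-dim features.
    A = (rng.random((5, 5)) < 0.4).astype(float)
    np.fill_diagonal(A, 0)
    A = np.maximum(A, A.T)                          # symmetrise
    H = rng.normal(size=(5, 8))
    W = rng.normal(size=(16, 4)) * 0.1              # shared weights: [h_i || mean_j h_j] -> 4 dims

    def sage_mean_layer(A, H, W):
        """GraphSAGE-mean-style conv layer: permutation-invariant neighbour mean,
        concatenated with the node's own features, then a shared linear map + ReLU."""
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1)   # avoid division by zero
        neigh_mean = (A @ H) / deg                          # psi: order-invariant aggregation
        combined = np.concatenate([H, neigh_mean], axis=1)
        return np.maximum(combined @ W, 0)                  # phi: same W applied to every node

    H_next = sage_mean_layer(A, H, W)                       # (5, 4) updated node embeddings
    print(H_next.shape)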

  • @amitsett8117 · 3 years ago · +2

    Found this on reddit. Great talk.

  • @kndlt · 2 years ago · +1

    Would it be possible for someone without much ML experience (but with a CS degree) to learn the theoretical part of GNNs inside out within a month?

    • @petarvelickovic6033 · 2 years ago

      It should be possible, in my opinion.
      Especially since GNNs are fundamentally discrete structures, they align very well with the kind of computation typically studied in a theoretical CS degree.

  • @GiovannaIwishyou · 3 years ago · +2

    Thank you, Petar :)

  • @stefanspalevic · 3 years ago · +3

    Thank you, Petar!! :)

  • @kristofneys2349 · 3 years ago · +1

    Thank you so much - very good and useful!

  • @fairuzshadmanishishir8171 · 3 years ago

    The relation between PGMs and graph NNs is not clear.
    Can you clarify the concepts?

  • @DeepFindr · 3 years ago · +1

    Really great summary! :)

  • @sachin63442 · 3 years ago

    Why can't you use XGBoost or decision trees for node-level classification instead of a GCN?

    • @petarvelickovic6033 · 3 years ago · +2

      Of course you can! Sergey Ivanov et al. recently showed it's a very strong baseline: arxiv.org/abs/2101.08543

    • @sachin63442 · 3 years ago

      @@petarvelickovic6033 So when should one use a GCN over XGBoost or decision trees? Not combined.

    • @petarvelickovic6033 · 3 years ago · +3

      As far as I know, boosting and decision trees are great for dealing with data that is assumed tabular, i.e. where you don't assume your nodes are linked together. GCNs (and/or more expressive GNNs) should be used whenever you assume that the links between your data points are actually meaningful and should be exploited.

  • @kejianshi9196 · 2 years ago · +2

    Pretty fast. Rewatching 3-4 times is helpful.

  • @thegreatlazydazz · 2 years ago · +1

    Matrices that commute are jointly diagonalizable. I understand this as: if AB = BA, then A and B have the same eigenvectors?
    However, this cannot be true, as I commutes with any matrix, and any vector is an eigenvector of I.

    • @petarvelickovic6033 · 2 years ago

      Good catch! I didn't have the time in the talk to get into the nuances of this, but essentially, you'll have exactly the same eigenbasis if the matrices commute _and_ have no repeated eigenvalues. If there are repeated eigenvalues (as is the case for I), the theorem becomes trickier to apply.
      en.m.wikipedia.org/wiki/Commuting_matrices has a list of properties relating commutativity and diagonalisation.

    • @thegreatlazydazz · 2 years ago

      I understand the assertion with no repeated eigenvalues. I thought about it a bit more; for repeated eigenvalues, it's more that there exists a change of basis that diagonalises all of them? Any unitary matrix diagonalises I. Thanks for the reference. The shift matrix was an excellent hint for the next property.
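
A small numerical illustration of the exchange above (a sketch, not a proof; the matrices are arbitrary examples): two symmetric matrices built on the same eigenbasis commute, and when one of them has all-distinct eigenvalues, diagonalising it also diagonalises the other. With repeated eigenvalues (e.g. the identity) the shared basis still exists, but it is no longer determined by one matrix alone.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two symmetric matrices sharing the same orthonormal eigenbasis Q.
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
    A = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T      # distinct eigenvalues
    B = Q @ np.diag([5.0, -1.0, 0.5, 2.5]) @ Q.T     # same eigenvectors, different eigenvalues

    print(np.allclose(A @ B, B @ A))                 # True: they commute

    # A has distinct eigenvalues, so its eigenvectors are unique up to sign;
    # the basis that diagonalises A therefore also diagonalises B.
    _, V = np.linalg.eigh(A)
    M = V.T @ B @ V
    print(np.allclose(M, np.diag(np.diag(M))))       # True: off-diagonal entries vanish

    # With repeated eigenvalues the argument weakens: I commutes with everything,
    # and any orthonormal basis diagonalises I, but not necessarily B.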

  • @reidwyde5723 · 3 years ago · +1

    Thank you so much!

  • @sahar2003 · 3 years ago · +1

    Good talk. Thanks.

  • @tae898 · 3 years ago · +4

    If you are using Transformers, you are using GNNs!

  • @insightfool · 3 years ago

    Brain.....melted

  • @kprakash9665 · 3 years ago

    The theory is not clearly explained, and the practical side is not explained at all. Worst university.

  • @sb-xq1sr · 2 years ago · +2

    37:17 Why can't an adjacency matrix be eigendecomposed? AFAIK, any real symmetric matrix is diagonalizable. en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix#Real_symmetric_matrices
    I believe you were trying to substitute the adjacency matrix with a positive semi-definite matrix that can also express all adjacency properties. That way, the eigendecomposed diagonal matrix Λ only has non-negative values.

    • @petarvelickovic6033 · 2 years ago · +2

      Thanks for your note! You are certainly correct, and this is one aspect I hadn't qualified well enough in the talk.
      Yes, any undirected adjacency matrix can be eigendecomposed. However, the Laplacian's eigendecomposition has nicer properties: it guarantees all eigenvalues are nonnegative (and the multiplicity of the zero eigenvalue can be used to count connected components), and the resulting eigenvectors can be directly used to approximate many interesting problems on the graph (e.g. optimal cuts).
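
For concreteness, a small NumPy sketch of the properties mentioned in this reply (illustrative only, using the usual unnormalised Laplacian L = D - A): the eigenvalues are all nonnegative, and the multiplicity of the zero eigenvalue equals the number of connected components.

    import numpy as np

    # Toy undirected graph with two connected components: {0, 1, 2} and {3, 4}.
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 0, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)

    D = np.diag(A.sum(axis=1))            # degree matrix
    L = D - A                             # unnormalised graph Laplacian (symmetric, PSD)

    eigvals = np.linalg.eigvalsh(L)       # real eigenvalues in ascending order
    print(eigvals)                        # all >= 0 (up to floating-point error)
    print(int(np.sum(np.isclose(eigvals, 0.0))))   # 2: number of connected components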