Stanford CS224W: ML with Graphs | 2021 | Lecture 10.1 - Heterogeneous & Knowledge Graph Embedding

  • Published Feb 8, 2025
  • For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: stanford.io/3p...
    Lecture 10.1 - Heterogeneous Graphs and Knowledge Graph Embeddings
    Jure Leskovec
    Computer Science, PhD
    In this lecture, we first introduce heterogeneous graphs, with a definition and several examples. Next, we discuss a model called RGCN, which extends the GCN to heterogeneous graphs. To make the model more scalable, several approximation approaches are introduced, including block diagonal matrices and basis learning. Finally, we show how RGCN predicts the labels of nodes and links.
    To follow along with the course schedule and syllabus, visit:
    web.stanford.ed...
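As an aside for readers following along, the per-relation update at the heart of RGCN (a sum of relation-specific transforms, normalized by per-relation degree, plus a self-loop) can be sketched in plain NumPy. All names, shapes, and the ReLU choice here are illustrative assumptions, not the course's reference code:

```python
import numpy as np

# Minimal RGCN layer sketch (assumed interface):
# h'_v = ReLU( sum_r sum_{u in N_r(v)} (1/c_{v,r}) W_r h_u  +  W_0 h_v )
def rgcn_layer(H, adj_per_relation, W_rel, W_self):
    """H: (num_nodes, d_in) node features.
    adj_per_relation: list of (num_nodes, num_nodes) 0/1 matrices, one per relation.
    W_rel: list of (d_in, d_out) relation-specific weights.
    W_self: (d_in, d_out) self-loop weight."""
    out = H @ W_self  # self-connection
    for A, W in zip(adj_per_relation, W_rel):
        deg = A.sum(axis=1, keepdims=True)          # c_{v,r}: relation-specific degree
        norm = np.divide(A, np.maximum(deg, 1))     # 1/c_{v,r} normalization, guard deg=0
        out += norm @ (H @ W)                       # aggregate relation-r messages
    return np.maximum(out, 0)                       # ReLU nonlinearity
```

The normalization constant c_{v,r} is node- and relation-specific here (each node averages over its neighbors under that relation), which is one of the choices the lecture mentions.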

COMMENTS • 17

  • @sachinbs3961
    @sachinbs3961 27 days ago

    In the scalability methods at 16:20 and 18:44, I think the weight matrix should be W(r,l) rather than W(r)? And also Vb(l)? The missing "l" causes a lot of confusion 🙂

  • @emeralduchechukwuhenry2126
    @emeralduchechukwuhenry2126 1 year ago +1

    In Basis Learning (18:42), it is unclear whether the learned scalars a_rb are relation-specific, which would mean the importance weights for each matrix V_b form a vector rather than a scalar.

  • @rainergog
    @rainergog 11 months ago +2

    Just one error remark: on slide 13, the term "m_v" (m subscript v) is missing from the aggregation function (bottom). It has been corrected in the slides that can be downloaded.
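For reference, the general message-passing template from the earlier lectures, with the self-message m_v^{(l)} the commenter refers to included (notation assumed from that framework):

```latex
h_v^{(l)} = \sigma\!\left( \mathrm{AGG}\!\left( \left\{ m_u^{(l)} : u \in N(v) \right\},\; m_v^{(l)} \right) \right)
```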

  • @karishmadixit7970
    @karishmadixit7970 1 year ago +2

    When choosing negative edges for training, do you also need to make sure that a negative edge is not a validation edge?

    • @psebsellars
      @psebsellars 3 months ago

      The negative edge should not exist at all, in any of the sets.
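A minimal way to enforce that, sketched in Python (the function name and interface are my own illustration): candidate negatives are drawn uniformly and rejected if they collide with any positive edge from train, validation, or test:

```python
import random

def sample_negative_edges(num_nodes, all_positive_edges, k, seed=0):
    """Sample k negative (u, v) pairs that appear in no split.
    `all_positive_edges` is assumed to be the union of positive edges
    across train, validation, and test."""
    rng = random.Random(seed)
    forbidden = set(all_positive_edges)
    negatives = []
    while len(negatives) < k:
        u = rng.randrange(num_nodes)
        v = rng.randrange(num_nodes)
        if u != v and (u, v) not in forbidden:
            negatives.append((u, v))
            forbidden.add((u, v))  # also avoid duplicate negatives
    return negatives
```

Rejection sampling like this is cheap when the graph is sparse, since a random pair is overwhelmingly likely to be a true non-edge.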

  • @ahmedhossamali4255
    @ahmedhossamali4255 2 years ago +2

    In Basis Learning (18:42), it is unclear how we get the basis matrices (V_b). Do we set them to be trainable just like the importance weight coefficients (a_rb)?

    • @Black_White_Knowledge
      @Black_White_Knowledge 2 years ago

      yes

    • @emeralduchechukwuhenry2126
      @emeralduchechukwuhenry2126 1 year ago

      @@Black_White_Knowledge If this is the case, then this approach will require more computation and isn't better than the plain RGCN approach.

    • @shubhampatel6908
      @shubhampatel6908 1 year ago

      @@emeralduchechukwuhenry2126 Nope, here we only need B different V_b matrices in total, while in the plain RGCN approach we need as many matrices as there are unique relations, which can be hundreds or even more in real graphs. B is also chosen by us, and is generally around 10 if I heard correctly.
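The parameter-count argument above can be made concrete with a quick back-of-envelope calculation under basis decomposition W_r = Σ_b a_{rb} V_b (the numbers below are illustrative, with B = 10 as mentioned in the thread):

```python
def full_params(num_relations, d):
    """Plain RGCN: one d x d weight matrix per relation."""
    return num_relations * d * d

def basis_params(num_relations, d, B):
    """Basis learning: B shared d x d bases + B scalars per relation."""
    return B * d * d + num_relations * B

# e.g. 100 relations, hidden dim d = 64, B = 10 bases:
#   full:  100 * 64 * 64          = 409,600 parameters
#   basis: 10 * 64 * 64 + 100*10  =  41,960 parameters (~10x fewer)
```

So the per-relation cost drops from O(d²) to O(B), which is exactly why the importance coefficients a_rb add negligible overhead compared to the savings on the matrices.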

  • @BorisVasilevskiy
    @BorisVasilevskiy 1 year ago

    RGCN doesn't use the node type, only the edge type, does it? Just making sure I didn't miss anything.
    Overall, the node type can often be derived from the relation type, e.g. in a citation network. Thus the node type may not be important at all.

    • @shubhampatel6908
      @shubhampatel6908 1 year ago

      Can you elaborate on what you mean by node type? If the answer is node prediction or classification, then yes, it can be done with RGCN, with the only difference being multiple W matrices instead of one W matrix in standard GNNs. I think the prof passed over it very quickly at 21:50 as it was pretty straightforward, and rather spent a lot of time explaining link prediction, as it is quite different and complex in RGCN.

  • @vaibhavchetri8819
    @vaibhavchetri8819 1 year ago +1

    Where can I find the coding implementation of the GNN and the lectures you provide? I need a tutorial rather than an unexplained Colab. I will be implementing it in a project based on a Twitter dataset. Please help!!

  • @ir0nt0ad
    @ir0nt0ad 1 month ago +1

    I wish he hadn't used "embedding" and "message" interchangeably; it keeps throwing me off.

  • @shaozefan8268
    @shaozefan8268 2 years ago +1

    This class keeps presenting new concepts without a clear introduction, or gives intuition that is not convincing and full of weak points. I am so disappointed.

    • @keysky_1622
      @keysky_1622 2 years ago +2

      Maybe because GNNs are a relatively unexplored area. Have you found alternatives to the class?

    • @shaozefan8268
      @shaozefan8268 2 years ago +2

      @@keysky_1622 No, this is already the best one 🤣. I have to admit my disappointment may come from my weak background in this area 😭

    • @BorisVasilevskiy
      @BorisVasilevskiy 1 year ago +2

      This particular lecture relies a lot on previous ones, maybe that's the reason. What specific concepts did you find not clear enough?