Graph Node Embedding Algorithms (Stanford - Fall 2019)

  • Published 31 Oct 2019
  • In this video, a group of the most recent node embedding algorithms, such as Word2vec, DeepWalk, NBNE, random walks, and GraphSAGE, is explained by Jure Leskovec. Amazing class!
  • Science & Technology

COMMENTS • 42

  • @sasankv9919
    @sasankv9919 3 years ago +26

    Watched it for the third time and now everything makes sense.

  • @i2005year
    @i2005year 3 years ago +24

    15:30 Basics of deep learning for graphs
    51:00 Graph Convolutional Networks
    1:02:07 Graph Attention Networks (GAT)
    1:13:57 Practical tips and demos

  • @ernesttaf
    @ernesttaf 4 years ago +5

    Great, Sir. Congratulations on your outstanding teaching capabilities. It really changed my life and my view on Graph Networks. Thank you very much, Professor.

  • @jayantpriyadarshi9266
    @jayantpriyadarshi9266 4 years ago +8

    Thank you for this lecture. Really changed my view about GCNs

    • @sanjaygalami
      @sanjaygalami 3 years ago

      What's the major point that struck you? Let others know, if it's convenient for you. Thanks

  • @TheAnna1101
    @TheAnna1101 4 years ago +5

    Awesome video. Please share more on this topic!

  • @Olivia-wu4ve
    @Olivia-wu4ve 4 years ago +4

    Awesome! Thanks for sharing. Will the hands on session be posted?

  • @sm_xiii
    @sm_xiii 4 years ago +9

    Prof. Leskovec covered a lot of material in 1.5 hours!
    It was very engaging because of his energy and teaching style.

  • @znb5873
    @znb5873 2 years ago +5

    Thank you so much for making this lecture publicly available. I have a question: is it possible to apply node embedding to dynamic (temporal) graphs? Are there any specific methods/algorithms to follow?
    Thanks in advance for your answer.

  • @Commonsenseisrare
    @Commonsenseisrare 9 months ago

    Amazing lecture on GNNs.

  • @gautamrajit225
    @gautamrajit225 3 years ago +8

    Hello. These lectures are very interesting. Would it be possible to share the GitHub repositories so that I can get a better understanding of the code involved in the implementation of these concepts?

  • @MrSajjadathar
    @MrSajjadathar 4 years ago +3

    @Machine Learning TV Yes, and please share the link where you posted all the graph representation learning lectures. I will be thankful.

    • @eyupunlu2944
      @eyupunlu2944 3 years ago

      I think it is this one: ua-cam.com/video/YrhBZUtgG4E/v-deo.html

  • @vgreddysaragada
    @vgreddysaragada 10 months ago

    Great work..

  • @fredconcklin1094
    @fredconcklin1094 2 years ago

    Classes are so fun. The death here is different than the death in Computer Vision due to NSA death.

  • @ShobhitSharmaMTAI
    @ShobhitSharmaMTAI 3 years ago

    My question at 31:00: what if the previous-layer embedding of the same node is not multiplied by Bk, as in Bk hv^(k-1)? What would be the impact
    on the embedding?
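
A minimal numpy sketch of the update rule being asked about (around 31:00), assuming the standard form h_v^(k) = ReLU(W_k · mean of neighbor embeddings + B_k · h_v^(k-1)); the toy graph, weights, and function names are illustrative, not the lecture's code. Dropping the B_k term means a node's own previous embedding no longer feeds directly into its new one, so its own features survive only to the extent that its neighbors pass them back.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy graph: adjacency list and previous-layer embeddings h_v^(k-1).
    neighbors = {0: [1, 2], 1: [0], 2: [0]}
    h_prev = rng.normal(size=(3, 4))      # 3 nodes, 4-dim embeddings
    d_out = 4
    W_k = rng.normal(size=(d_out, 4))     # transforms the averaged neighbor message
    B_k = rng.normal(size=(d_out, 4))     # transforms the node's own previous embedding

    def layer(h_prev, use_self_term=True):
        h_next = np.zeros((len(neighbors), d_out))
        for v, nbrs in neighbors.items():
            agg = h_prev[nbrs].mean(axis=0)        # average of neighbor embeddings
            z = W_k @ agg
            if use_self_term:
                z = z + B_k @ h_prev[v]            # the B_k h_v^(k-1) term in question
            h_next[v] = np.maximum(z, 0.0)         # ReLU
        return h_next

    # Without B_k, node v's new embedding depends only on its neighbors.
    print(np.abs(layer(h_prev, True) - layer(h_prev, False)).mean())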

  • @eugeniomarinelli1104
    @eugeniomarinelli1104 3 years ago +1

    Where do I find the slides for this lecture?

  • @MingshanJia
    @MingshanJia 4 years ago +10

    Wanna learn the whole series...

    • @wwemara
      @wwemara 3 years ago +1

      ua-cam.com/play/PL-Y8zK4dwCrQyASidb2mjj_itW2-YYx6-.html

  • @alvin5424
    @alvin5424 4 years ago +3

    Any plans to publish lectures 17, 18 and 19?

  • @EOh-ew2qf
    @EOh-ew2qf 1 year ago

    43:40 I have a question about the slide here. How can you generalize to a new node when the model learns by aggregating neighborhoods and the new node doesn't have a neighborhood yet?
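
One way to read the inductive claim at 43:40 (a sketch assuming a GraphSAGE-style mean aggregator; the weights and helper below are hypothetical, not the lecture's code): the weight matrices are shared across all nodes, so embedding a new node just means applying the same learned function to its own features and whatever neighbors it currently has. With no neighbors yet, the neighbor message is empty and only the self-feature term contributes.

    import numpy as np

    rng = np.random.default_rng(1)
    d_in, d_out = 4, 4
    W = rng.normal(size=(d_out, d_in))   # shared learned weights (illustrative values)
    B = rng.normal(size=(d_out, d_in))

    def embed(x_self, x_neighbors):
        """GraphSAGE-style update: works for any node because W and B are shared."""
        if len(x_neighbors) > 0:
            agg = np.mean(x_neighbors, axis=0)
        else:
            agg = np.zeros(d_in)         # no neighborhood yet: empty neighbor message
        return np.maximum(W @ agg + B @ x_self, 0.0)

    x_new = rng.normal(size=d_in)                   # a brand-new node with features but no edges
    print(embed(x_new, []))                         # embedding comes from its own features only
    print(embed(x_new, [rng.normal(size=d_in)]))    # the same function folds neighbors in once edges appear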

  • @kanishkmair2920
    @kanishkmair2920 4 years ago +2

    In GCN, we get a single output. In GraphSAGE, you concatenate to keep the info separate. So at each step, the output H^k will have two outputs, won't it? If not, then how are they aggregated and still kept separate?

    • @paulojhonny4364
      @paulojhonny4364 4 years ago

      Kanishk Mair hi, I didn’t understand either. Did you find anything about it?

    • @kanishkmair2920
      @kanishkmair2920 4 years ago

      I tried working with PyTorch Geometric's SAGEConv. Not sure how it works, but looking at its source code might help.

    • @sm_xiii
      @sm_xiii 4 years ago +1

      I think the concatenated output is the embedding of the target node. And it depends on the downstream task to further process it, by passing it through more layers, before having the final output.
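
A small sketch of the concatenation being discussed, assuming GraphSAGE-style notation (the matrix W and the dimensions below are illustrative): the self embedding and the aggregated neighbor embedding sit in separate slots of one concatenated vector, and a single weight matrix maps that 2d-dimensional vector back to one d-dimensional output, so H^k still holds one vector per node rather than two.

    import numpy as np

    rng = np.random.default_rng(2)
    d = 4
    W = rng.normal(size=(d, 2 * d))    # maps the concatenated [self, neighbors] vector to d dims

    def sage_update(h_self, h_neighbors):
        agg = np.mean(h_neighbors, axis=0)        # aggregate the neighborhood
        concat = np.concatenate([h_self, agg])    # self and neighbor info kept in separate slots
        return np.maximum(W @ concat, 0.0)        # one d-dim output per node, not two

    h_self = rng.normal(size=d)
    h_nbrs = rng.normal(size=(3, d))
    print(sage_update(h_self, h_nbrs).shape)      # (4,) - a single embedding vector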

  • @baharehnajafi9568
    @baharehnajafi9568 4 years ago +1

    Hi, where can I find the next lectures of him?

    • @MachineLearningTV
      @MachineLearningTV  4 years ago +7

      We will upload them soon

    • @wwemara
      @wwemara 3 years ago +2

      ua-cam.com/play/PL-Y8zK4dwCrQyASidb2mjj_itW2-YYx6-.html

  • @ramin5665
    @ramin5665 1 year ago

    Can you share the hands-on link?

  • @AdityaPatilR
    @AdityaPatilR 3 years ago

    Deeper networks will not always be more powerful, as you may lose vector features in translation. And due to the additional weight matrices, the neural network will be desensitized to the feature input. The number of hidden layers should not be greater than the input dimension.

  • @phillipneal8194
    @phillipneal8194 4 years ago

    How do you aggregate dissimilar features? For example, sex, temperature, and education level for each node?
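
A common way to handle this in practice (a sketch of standard feature preprocessing, not something prescribed in the lecture; the attribute values below are made up): one-hot encode categorical attributes, normalize numeric ones, and concatenate everything into a single per-node feature vector before it enters the aggregation.

    import numpy as np

    # Hypothetical raw node attributes: (sex, temperature, education level)
    nodes = [("F", 36.6, "PhD"), ("M", 37.2, "BSc"), ("F", 36.9, "MSc")]
    sex_values = ["F", "M"]
    edu_values = ["BSc", "MSc", "PhD"]

    def one_hot(value, values):
        vec = np.zeros(len(values))
        vec[values.index(value)] = 1.0
        return vec

    temps = np.array([t for _, t, _ in nodes])
    t_mean, t_std = temps.mean(), temps.std()

    def encode(sex, temp, edu):
        return np.concatenate([
            one_hot(sex, sex_values),        # categorical -> one-hot
            [(temp - t_mean) / t_std],       # numeric -> z-score normalized
            one_hot(edu, edu_values),        # ordinal treated as categorical here
        ])

    X = np.stack([encode(*n) for n in nodes])   # node feature matrix fed to the GNN
    print(X.shape)                              # (3, 6)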

  • @MrSajjadathar
    @MrSajjadathar 4 years ago +1

    Sir, can you please share Tuesday's lecture?

    • @MachineLearningTV
      @MachineLearningTV  4 years ago

      The past Tuesday?

    • @MrSajjadathar
      @MrSajjadathar 4 years ago

      @@MachineLearningTV Yes, and please share the link where you posted all the graph representation learning lectures. I will be thankful.

    • @MachineLearningTV
      @MachineLearningTV  4 years ago +1

      It is available now. Check the new video

  • @deweihu1003
    @deweihu1003 3 years ago

    On behalf of people from a remote eastern country: niubi!!!!

  • @user-je6nw3ow5z
    @user-je6nw3ow5z 4 years ago

    Where can I get slides?

    • @ducpham9991
      @ducpham9991 4 years ago

      You can find it here: web.stanford.edu/class/cs224w/

  • @kognitiva
    @kognitiva 3 years ago

    ua-cam.com/video/7JELX6DiUxQ/v-deo.html "what we would like to do is here input the graph and over here good predictions will come" Yes, that is exactly it! xD

  • @jcorona4755
    @jcorona4755 11 months ago

    They pay so that it looks like they have more followers. In fact, you pay $10 pesos per video.