Graph Representation Learning: William L. Hamilton - 2021 McGill AI Learnathon

  • Published 21 May 2024
  • Recent Advances and Open Challenges
    Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial if we want systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, most prominently in the development of graph neural networks (GNNs). Advances in GNNs have led to state-of-the-art results in numerous domains, including chemical synthesis, 3D-vision, recommender systems, question answering, and social network analysis.
    In the first part of this talk I will provide an overview and summary of recent progress in this fast-growing area, highlighting foundational methods and theoretical motivations. In the second part of this talk I will discuss fundamental limitations of the current GNN paradigm. Finally, I will conclude the talk by discussing recent progress my group has made in advancing graph representation learning beyond the GNN paradigm.
    ---
    Learn more about William L. Hamilton's work: williamleif.github.io/
    Read more about the 2021 McGill AI Learnathon:
    Check out McGill AI: https://mcgillai.com
  • Science & Technology

COMMENTS • 7

  • @mkwarlock • 2 months ago +1

    Mr. Hamilton's book is the best one on GNNs that I've come across, hands down.
    What a king, thank you!

  • @DataTranslator • 9 days ago

    Excellent 🙏🏾. Thank you

  • @InquilineKea • 3 months ago

    This is way better than most

  • @LeslieSolorzanov • 3 years ago +3

    Could you add some of the examples you said you could give on going from shallow to deep? And how to include node features?

  • @gaelanmelanson8412 • 2 years ago

    Some of the figures look like they were taken from a 2018 talk by Jure Leskovec. For example:
    ua-cam.com/video/fbRDfhNrCwo/v-deo.html
    looks very similar to:
    ua-cam.com/video/oQL4E1gK3VU/v-deo.html

    • @emilymcmilin94 • 1 year ago

      Looks like they are co-authors: www-cs.stanford.edu/people/jure/pubs/graphrepresentation-ieee17.pdf

    • @jryzu8043 • 1 year ago

      Considering both were working at Stanford when they pioneered GNNs, it’s likely there was already some collaboration between them.