Deep learning on graphs: successes, challenges | Graph Neural Networks | Michael Bronstein

  • Published 3 Jun 2024
  • Conference Website: saiconference.com/IntelliSys
    Deep learning on graphs and network-structured data has recently become one of the hottest topics in machine learning. Graphs are powerful mathematical abstractions that can describe complex systems of relations and interactions in fields ranging from biology and high-energy physics to social science and economics. In this talk, I will outline the basic methods, applications, challenges and possible future directions in the field.
    About the Speaker: Michael Bronstein is a professor at Imperial College London, where he holds the Chair in Machine Learning and Pattern Recognition, and is Head of Graph Learning Research at Twitter. Michael received his PhD from the Technion in 2007. He has held visiting appointments at Stanford, MIT, Harvard, and Tel Aviv University, and has also been affiliated with three Institutes for Advanced Study (at TU Munich as a Rudolf Diesel Fellow (2017-), at Harvard as a Radcliffe Fellow (2017-2018), and at Princeton (2020)). Michael is the recipient of five ERC grants, a Fellow of IEEE, IAPR, and ELLIS, an ACM Distinguished Speaker, and a World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019). He previously served as Principal Engineer at Intel Perceptual Computing and was one of the key developers of the Intel RealSense technology.
  • Science & Technology

COMMENTS • 22

  • @andreasholzinger7056 • 1 year ago • +4

    An excellent talk on the emerging topic of geometric deep learning - this could bring topological data analysis, which we have been doing for decades, to new importance!

  • @IproCoGo • 3 years ago

    This talk provides a helpful understanding of the intuition behind the speaker's work on geometric deep learning. Great talk! Thank you.

  • @Ali-xo9ht • 3 years ago • +2

    Amazing talk! I learned a lot and got some ideas for my research. Cheers!

  • @yiyinghu2304 • 3 years ago • +4

    great talk, such a generous share

  • @KwstaSRr • 2 years ago

    Man, this is a Masterpiece.
    Thank you for sharing.

  • @romigoldner4040 • 3 years ago • +1

    Fantastic talk! Thanks for sharing!

  • @torstenschindler1965 • 3 years ago • +6

    Nice lecture!
    “Attention is all you need.” - Is that also true for GNNs?
    Can dynamic graph networks be used to predict chemical reaction outcomes, yields, or retrosynthetic pathways?
    The secret sauce of AutoGluon-Tabular is bagging, stacking, and distillation. How would one apply distillation to graph neural networks?

    • @chrisoman87 • 2 years ago • +2

      Well, regarding attention, I would look up GAT (Graph Attention Networks).
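
      For anyone following up on that pointer, below is a minimal sketch of a single-head GAT-style attention layer in plain PyTorch. The class name TinyGATLayer, the dense-adjacency formulation, and the toy ring-graph example are illustrative assumptions, not code from the talk; optimized implementations are available in libraries such as PyTorch Geometric or DGL.

      # Minimal single-head graph attention layer, in the spirit of GAT
      # (Velickovic et al., 2018). Illustrative sketch only.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class TinyGATLayer(nn.Module):
          def __init__(self, in_dim, out_dim):
              super().__init__()
              self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared node transform
              self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scoring vector

          def forward(self, x, adj):
              # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
              h = self.W(x)                                    # (N, out_dim)
              n = h.size(0)
              # e_ij = LeakyReLU(a^T [h_i || h_j]) for every ordered node pair
              h_i = h.unsqueeze(1).expand(n, n, -1)
              h_j = h.unsqueeze(0).expand(n, n, -1)
              e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1), 0.2)
              # Keep only edges of the graph, then normalize per neighbourhood
              e = e.masked_fill(adj == 0, float('-inf'))
              alpha = torch.softmax(e, dim=1)                  # attention coefficients
              return alpha @ h                                 # weighted aggregation

      # Toy usage: 5 nodes, 8-dim features, ring graph with self-loops
      x = torch.randn(5, 8)
      adj = torch.eye(5)
      for i in range(5):
          adj[i, (i + 1) % 5] = adj[i, (i - 1) % 5] = 1.0
      out = TinyGATLayer(8, 16)(x, adj)  # -> shape (5, 16)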

  • @leotrisport • 3 years ago • +1

    Awesome lecture thanks!!

    • @leotrisport • 3 years ago

      One thing that has always confused me is that it seems (in every example I have found) that with node/link prediction the whole dataset is a single graph. Is it possible to train/predict with different graphs?

  • @PlexusTen • 3 years ago

    Great talk!

  • @francescos7361 • 1 year ago

    Thanks super powerful.

  • @kellybrower301 • 3 years ago • +1

    Gold

  • @gems34 • 2 years ago

    Yup, food is medicine, many of us overlook this fundamental property. Eat right and the likelihood of illness diminishes exponentially

  • @Yutaro-Yoshii • 1 year ago

    41:45 It was a bunny mesh, iirc.

    • @Yutaro-Yoshii • 1 year ago

      I love how carefully picking samples speeds up training! Great idea that may be applicable in other situations!

  • @gumbo64 • 2 years ago

    Oh, I was wondering how all those 3D AI things worked.

  • @peabrane8067 • 2 years ago

    Hi

  • @siquod • 2 years ago

    I didn't even need to see the bunny to be pretty sure it must be a bunny. Most animal meshes in academia are bunnies.

    • @afbf6522 • 2 years ago

      Applying Bayes' naive rule, eh?😜

  • @mikefat6189 • 3 years ago • +1

    This is who you want to be getting your geometry from hahahahaha