Geometric Intuition for Training Neural Networks

  • Published 24 Nov 2019
  • Leo Dirac (@leopd) gives a geometric intuition for what happens when you train a deep learning neural network. He starts with a physics analogy for how SGD works and then describes the shape of neural network loss surfaces.
    This talk was recorded live on 12 Nov 2019 as part of the Seattle Applied Deep Learning (sea-adl.org) series.
    References from the talk:
    Loss Surfaces of Multilayer Networks: arxiv.org/pdf/1412.0233.pdf
    Sharp minima papers:
    - Modern take: arxiv.org/abs/1609.04836
    - Hochreiter & Schmidhuber, 1997: www.bioinf.jku.at/publications...
    SGD converges to limit cycles: arxiv.org/pdf/1710.11029.pdf
    Entropy-SGD: arxiv.org/abs/1611.01838
    Parle: arxiv.org/abs/1707.00424
    FGE: arxiv.org/abs/1802.10026
    SWA: arxiv.org/pdf/1803.05407.pdf
    SWA implementation in PyTorch: pytorch.org/blog/stochastic-w... (see the usage sketch below)
  • Science & Technology
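
    Since the references above point to the SWA paper and its PyTorch implementation, here is a minimal sketch of how SWA is typically wired into a training loop. It assumes the torch.optim.swa_utils API that ships with recent PyTorch releases (the linked blog post may describe an earlier torchcontrib variant); the model, data, and hyperparameters below are stand-ins, not anything from the talk.

    import torch
    from torch import nn
    from torch.optim.swa_utils import AveragedModel, SWALR, update_bn
    from torch.utils.data import DataLoader, TensorDataset

    # Tiny synthetic setup so the sketch runs end to end (stand-ins only).
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    loss_fn = nn.MSELoss()
    data = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    loader = DataLoader(data, batch_size=32, shuffle=True)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

    swa_model = AveragedModel(model)           # keeps a running average of the weights
    swa_scheduler = SWALR(optimizer, swa_lr=0.05)
    swa_start = 20                             # epoch at which weight averaging begins

    for epoch in range(30):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss_fn(model(inputs), targets).backward()
            optimizer.step()
        if epoch >= swa_start:
            swa_model.update_parameters(model)  # fold the current weights into the average
            swa_scheduler.step()                # hold the SWA learning rate
        else:
            scheduler.step()

    # Recompute batch-norm statistics for the averaged model before evaluating it.
    update_bn(loader, swa_model)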

COMMENTS • 20

  • @susmitislam1910 · 3 years ago · +8

    For those who are wondering, yes, he's the grandson of the late great Paul Dirac.

  • @miguelduqueb7065 · 2 years ago · +2

    Insights explained this easily reflect a deep understanding of the topic and great teaching skills. I am eager to see more lectures or talks by this author.
    Thanks.

  • @MrArihar · 4 years ago

    Really useful resource with intuitively understandable explanations!
    Thanks a lot!

  • @PD-vt9fe · 4 years ago · +2

    Thank you so much for this excellent talk.

  • @katiefaery · 4 years ago

    He’s a great speaker. Really well explained. Thanks for sharing.

  • @uwe_sterr · 3 years ago · +1

    Hi Leo,
    thanks for this very impressive way of making somewhat complicated concepts so easy to understand with simple but well-structured visualisations.

  • @RobertElliotPahel-Short · 3 years ago · +1

    This is such a great talk! Keep it up my dude!!

  • @matthewtang1489 · 4 years ago · +5

    This is so coooooollll!!!!!!!

  • @oxfordsculler8013 · 3 years ago · +1

    Great video. Why no more? These are very insightful.

  • @matthewhuang7857 · 1 year ago · +2

    Thanks for the talk, Leo! I'm now a couple of months into ML, and this level of articulation really helped a lot. This is probably a rookie mistake in this context, but when my model struggles to converge, I often assume it has reached a local minimum. My practice is to bump up the learning rate significantly, hoping the model can leap over it and reach a point where it can re-converge. According to what you said, there is evidence conclusively showing that these loss functions have no bad local minima. I'm wondering which specific papers you were referring to.
    Regards,
    Matt
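
    For readers curious what the learning-rate-bump practice described in this comment might look like in code, here is a rough PyTorch-style sketch. The maybe_boost_lr helper and its thresholds are hypothetical, purely to illustrate the heuristic; they are not from the talk or from any library.

    # Hypothetical helper: if the validation loss has not improved for
    # `patience` epochs, multiply the learning rate by `boost` so the
    # optimizer can "leap over" a flat region and try to re-converge.
    def maybe_boost_lr(optimizer, val_loss, state, patience=5, boost=10.0, min_delta=1e-4):
        if val_loss < state["best"] - min_delta:
            state["best"], state["stale"] = val_loss, 0
        else:
            state["stale"] += 1
            if state["stale"] >= patience:
                for group in optimizer.param_groups:
                    group["lr"] *= boost
                state["stale"] = 0

    # Usage inside a training loop (state carried across epochs):
    #   state = {"best": float("inf"), "stale": 0}
    #   maybe_boost_lr(optimizer, validate(model), state)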

  • @ramkitty · 3 years ago

    This is a great lecture that ends at Wolfram's argument for quantum physics and relativity, and at what I think manifests as Orch-OR-type consciousness through Penrose twistor collapse.

  • @abhijeetvyas7365 · 3 years ago

    Dude, awesome!

  • @berargumen2390 · 3 years ago · +2

    This video led me to my "aha" moment, thanks.

    • @bluemamba5317 · 3 years ago · +3

      Was it the pink shirt, or the green belt?

  • @srijeetful · 4 years ago

    nice one

  • @linminhtoo · 3 years ago · +3

    Very nice (and certainly mind-blowing) video, but according to ua-cam.com/video/78vq6kgsTa8/v-deo.html, that complicated loss landscape at 13:51 is not actually a ResNet but a VGG. The ResNet one looks a lot smoother due to the residual skip connections.

    • @LeoDirac · 3 years ago · +1

      Thanks for the kind words. The creators of that diagram called it a "ResNet" - see the first page of the referenced paper arxiv.org/pdf/1712.09913.pdf . Skip connections make the loss surface smoothER, but remember that these surfaces have millions of dimensions. There are zillions of ways to visualize them in 2 or 3 dimensions, and every view discards tons of information. It's totally reasonable to expect that one view would look smooth and another very lumpy, for the same surface.
      TBH I don't know exactly what the authors of this paper did - they refer to "skip connections" a lot, and talk about resnets with and without them. I'm not sure if they mean "residuals" when they say "skip connections" but I'm not sure I'd call a resnet without RESiduals a RESnet myself. If you remove the residuals it's architecturally a lot closer to a traditional CNN like VGG / AlexNet / LeNet and not what I would call a ResNet at all.
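
      To make the point about 2-D views of a million-dimensional surface concrete, here is a rough sketch of the kind of slice the referenced paper plots: the loss evaluated on a plane spanned by two random directions around the trained weights. The loss_surface_slice helper is hypothetical, and it normalizes per tensor rather than per filter as the paper does.

      import torch

      def loss_surface_slice(model, loss_fn, inputs, targets, steps=25, span=1.0):
          # Hypothetical helper: evaluate the loss on a 2-D plane through
          # parameter space around the trained weights, along two random
          # directions (per-tensor normalization; the paper uses per-filter).
          center = [p.detach().clone() for p in model.parameters()]

          def random_direction():
              d = [torch.randn_like(p) for p in center]
              return [di * (pi.norm() / (di.norm() + 1e-10)) for di, pi in zip(d, center)]

          d1, d2 = random_direction(), random_direction()
          coords = torch.linspace(-span, span, steps)
          surface = torch.zeros(steps, steps)
          with torch.no_grad():
              for i, a in enumerate(coords):
                  for j, b in enumerate(coords):
                      for p, p0, u, v in zip(model.parameters(), center, d1, d2):
                          p.copy_(p0 + a * u + b * v)
                      surface[i, j] = loss_fn(model(inputs), targets).item()
              for p, p0 in zip(model.parameters(), center):  # restore the trained weights
                  p.copy_(p0)
          return coords, surface

      Different random direction pairs give different-looking slices of the same surface, which is one way to see why one view can look smooth and another lumpy.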

  • @underlecht · 3 years ago

    That "circle" idea is somwhat. I think it depends on implementation of SGD, if you do not have slope to that direction, how do you make going on edge of that circle? Do you really use randomized batches? Many questions

  • @hanyanglee9018 · 2 years ago

    17:00 is all you need.

  • @elclay · 3 years ago

    Please share the slides, sir.