Tom Goldstein: "What do neural loss surfaces look like?"

  • Published 20 Jun 2024
  • New Deep Learning Techniques 2018
    "What do neural loss surfaces look like?"
    Tom Goldstein, University of Maryland
    Abstract: Neural network training relies on our ability to find “good” minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple “filter normalization” method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. Using this method, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers. (A rough code sketch of the filter-normalization idea appears after the description below.)
    Institute for Pure and Applied Mathematics, UCLA
    February 8, 2018
    For more information: www.ipam.ucla.edu/programs/wor...
  • Science & Technology
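
A rough PyTorch sketch of the “filter normalization” idea described in the abstract, assuming a trained model and a user-supplied loss_on_dataset(model) helper (both hypothetical names); this is not the authors' implementation, which is linked in the comments below (github.com/tomgoldstein/loss-landscape):

    import torch

    def filter_normalized_direction(model):
        """Draw a random direction with the same shapes as the model's parameters,
        rescaling each filter so its norm matches the corresponding filter of the
        trained weights; this makes plots comparable across networks."""
        direction = []
        for p in model.parameters():
            d = torch.randn_like(p)
            if p.dim() > 1:   # conv / linear weights: rescale filter by filter
                for d_f, p_f in zip(d, p):
                    d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
            else:             # biases, norm layers: a common choice is not to perturb them
                d.zero_()
            direction.append(d)
        return direction

    def loss_surface(model, loss_on_dataset, steps=25, span=1.0):
        """Evaluate the loss on a 2-D grid around the trained weights:
        theta + alpha * delta + beta * eta."""
        theta = [p.detach().clone() for p in model.parameters()]
        delta = filter_normalized_direction(model)
        eta = filter_normalized_direction(model)
        alphas = torch.linspace(-span, span, steps)
        betas = torch.linspace(-span, span, steps)
        surface = torch.zeros(steps, steps)
        with torch.no_grad():
            for i, a in enumerate(alphas):
                for j, b in enumerate(betas):
                    for p, t, d, e in zip(model.parameters(), theta, delta, eta):
                        p.copy_(t + a * d + b * e)
                    surface[i, j] = float(loss_on_dataset(model))
            for p, t in zip(model.parameters(), theta):  # restore the trained weights
                p.copy_(t)
        return alphas, betas, surface

The resulting surface can then be rendered as a contour or surface plot; the grid resolution and span are arbitrary choices here.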

COMMENTS • 14

  • @hxhuang9306 · 5 years ago · +13

    As a noob I just want to see what loss functions in more complex networks look like. Was not disappointed.

  • @dshahrokhian · 4 years ago · +3

    Great video summary of all the work in the Maryland lab!

  • @AoibhinnMcCarthy · 3 years ago · +2

    Great lecture! A very clear explanation of how the network influences the loss function.

  • @ProfessionalTycoons · 5 years ago · +4

    Amazing video

  • @dimitermilushev575 · 4 years ago · +2

    Thanks, this is a great video. Do you see any issues/fundamental differences in applying these techniques to sequence models? Is there any research doing so?

  • @XahhaTheCrimson · 3 years ago · +1

    This helps me a lot

  • @joshuafox1757 · 6 years ago · +4

    How much computational power does it cost to evaluate the loss landscape using this method, compared to a more naive method?

  • @nguyendinhchung9677 · 2 years ago · +1

    A very good and funny video that brings a great sense of entertainment!

  • @user-ke5tu6ys7z · 1 year ago

    Thank you professor!! I love this video.
    38:45 Why do we look for saddle points? How can saddle points be applied in research?

    • @aaAa-vq1bd · 1 year ago · +1

      Saddle points are points where the surface curves upwards in some directions and downwards in others. But why are they useful? Good question. I looked it up:
      “one of the reasons neural network research was abandoned (once again) in the late 90s was *because the optimization problem is non-convex*. The realization from the work in the 80s and 90s that neural networks have an exponential number of local minima, along with the breakout success of kernel machines, also led to this downfall, as did the fact that networks may get stuck on poor solutions. Recently we have evidence that the issue of non-convexity may be a non-issue, which changes its relationship vis-a-vis neural networks.”
      What does this mean? Well, say we want to average the values in some neighborhood which is in n-dimensional space. But we can’t just compute the Gaussian kernel because it becomes (potentially exponentially) worse as we go up dimensions. So we need to unfold the manifold to a 2d Euclidean space (a flat coordinate system). What’s the issue? Local minima (areas which look like minima in a restricted region of a function) can get our averaging machine stuck as it applies a stochastic gradient descent algorithm. And there are exponentially many local minima in our neural network, in general, so we are worried that there’s no guarantee of optimization with neural networks at all. Well shit. The thing is though, that the critical points of high-dimensional surfaces for almost all of the trajectory are saddle points, not local minima. Saddle points pose no problem to stochastic gradient descent. And if there is any randomness in our data it’s exponentially likely that all the local minima are close to the global minima. Therefore local minima are not a problem.
      Basically, saddle points are the highly prevalent critical points in parameter space, and they don’t pose a problem for the algorithms and architectures we want to use. Local minima do pose a problem, but in high dimensions they seem to sit only in certain places (near the global minima). So you can’t use saddle points in your data for anything special; it’s just that a lot of algorithms (like Newton, quasi-Newton, and plain gradient descent) can treat saddle points like minima and get stuck near them much more often than they should. (A side note: there’s something called “saddle-free Newton”, written about in 2014, but SGD has been seen to work just as well without needing to compute a Hessian over a lot of parameters.) Hope that helps a bit. (A tiny numerical sketch of the escape behaviour follows below.)
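
      A tiny standalone Python sketch of that escape behaviour (not from the talk): on f(x, y) = x^2 - y^2, whose only critical point is a saddle at the origin, plain gradient descent started exactly on the unstable axis converges to the saddle, while a little gradient noise (as in SGD) kicks the iterate off that axis and lets it escape.

        import random

        def grad(x, y):
            return 2.0 * x, -2.0 * y          # gradient of f(x, y) = x^2 - y^2

        def descend(noise_scale, steps=200, lr=0.1, seed=0):
            rng = random.Random(seed)
            x, y = 1.0, 0.0                    # start exactly on the unstable y = 0 axis
            for _ in range(steps):
                gx, gy = grad(x, y)
                x -= lr * (gx + noise_scale * rng.gauss(0, 1))
                y -= lr * (gy + noise_scale * rng.gauss(0, 1))
            return x, y, x * x - y * y

        print(descend(noise_scale=0.0))        # converges to the saddle: x, y and f all end up near 0
        print(descend(noise_scale=0.01))       # escapes: |y| blows up and f becomes very negative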

  • @DonghuiSun · 5 years ago · +3

    Interesting research. Has the code been shared?

    • @onetonfoot · 5 years ago · +3

      github.com/tomgoldstein/loss-landscape

  • @user-xo9on2of4k · 6 years ago · +1

    Can I get the PDF file? Thanks.