CS231n Winter 2016: Lecture 6: Neural Networks Part 3 / Intro to ConvNets

  • Published 28 Sep 2024
  • Stanford Winter Quarter 2016 class: CS231n: Convolutional Neural Networks for Visual Recognition. Lecture 6.
    Get in touch on Twitter @cs231n, or on Reddit /r/cs231n.

COMMENTS • 47

  • @caner19959595 · 4 years ago · +18

    The project report mentioned at 27:13 was accepted to one of the ICLR workshops the following year and now has over 500 citations. Impressive stuff.

  • @leixun · 3 years ago · +11

    *My takeaways:*
    1. Parameter updates: optimizers such as momentum, Nesterov momentum, AdaGrad, RMSProp, Adam 3:53 (see the update-rule sketch after this list)
    2. Learning rate 28:20
    3. 2nd order optimizers 30:53
    4. Evaluation: model ensembles 36:19
    5. Regularization: dropout 38:25
    6. Gradient checking 56:55
    7. Convolutional Neural Networks 57:35
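
    A minimal numpy-style sketch of the update rules listed in point 1, roughly following the formulas shown in the lecture; the toy parameter vector, the dummy gradient, and the hyperparameter values are illustrative assumptions, not code from the course:

        import numpy as np

        # Toy parameter vector and a dummy gradient, just so each rule is runnable.
        # Each block below demonstrates one update rule using the same gradient.
        x = np.array([1.0, -2.0, 3.0])
        dx = 2 * x                               # gradient of f(x) = sum(x**2)

        learning_rate, mu = 1e-2, 0.9            # illustrative hyperparameters
        decay_rate, beta1, beta2, eps = 0.99, 0.9, 0.999, 1e-8

        v = np.zeros_like(x)                     # momentum velocity
        cache = np.zeros_like(x)                 # AdaGrad / RMSProp running sum
        m, v_adam = np.zeros_like(x), np.zeros_like(x)   # Adam moments

        # Vanilla SGD
        x += -learning_rate * dx

        # Momentum: build up velocity along persistent gradient directions
        v = mu * v - learning_rate * dx
        x += v

        # AdaGrad: per-parameter step sizes from accumulated squared gradients
        cache += dx**2
        x += -learning_rate * dx / (np.sqrt(cache) + eps)

        # RMSProp: like AdaGrad, but the cache decays (leaky accumulation)
        cache = decay_rate * cache + (1 - decay_rate) * dx**2
        x += -learning_rate * dx / (np.sqrt(cache) + eps)

        # Adam (bias correction omitted for brevity): momentum + RMSProp combined
        m = beta1 * m + (1 - beta1) * dx
        v_adam = beta2 * v_adam + (1 - beta2) * dx**2
        x += -learning_rate * m / (np.sqrt(v_adam) + eps)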

    • @ze2411 · 3 years ago · +1

      Thank you

    • @leixun · 3 years ago · +1

      @ze2411 You’re welcome

  • @citiblocsMaster · 6 years ago · +105

    1:04:50 "This might be useful in self-driving cars." One year later, head of AI at Tesla.

    • @notanape5415 · 4 years ago · +21

      (Car)-(Path)y - It was always in the name.

    • @666enough · 3 years ago

      @notanape5415 Wow, this coincidence is unbelievable.

  • @champnaman · 7 years ago · +9

    @15:10, Andrej says that according to recent work, local minima are not a problem for large networks. Could anyone point me to these papers? I'd be interested in reading these results.

  • @mihird9 · 5 years ago · +4

    57:30 Intro to CNN

  • @tthtlc · 6 years ago · +7

    Just found that the slides on the Stanford website have been updated with the 2017 slides + videos. Is there any way to get the original 2016 slides? The lectures are as classic as those of Andrew Ng.

  • @vijaypalmanit · 9 months ago

    Does he speak at 1.5x by default? 😛

  • @LearnWithAbubakr · 7 years ago · +1

    He speaks so fast!!

    • @zaidalyafey · 7 years ago · +1

      Muhammad Abu bakr: I watch at 1.25x.

  • @nikolaikrot8516 · 4 years ago · +1

    Best viewed at 0.8x speed :)

  • @adityapasari2548 · 7 years ago · +4

    poor cat :(

  • @irtazaa8200 · 8 years ago · +23

    Did CS231n in 2015. Great to see the videos released to the public now.
    Good job, Stanford!

  • @vivekloganathan9386 · 2 years ago · +9

    For someone curious like me @ 43:34:
    (Someone's Siri was mistakenly triggered and said this)
    Siri: "I am not sure what you said"

  • @ArdianUmam · 7 years ago · +16

    43:35 xD

  • @qwerty-tf4zn · 3 years ago · +2

    It's getting exponentially difficult

  • @twentyeightO1 · 5 months ago · +1

    This is helping me quite a lot, thanks!!!

  • @ThienPham-hv8kx · 2 years ago · +1

    Summary: start from simple gradient descent: x += - learning_rate * dx. Applied to a big dataset this is slow, because the gradient has to be computed over every data point before a single update. So we use Stochastic Gradient Descent (SGD), which estimates the gradient from a randomly sampled mini-batch of data points instead of the whole dataset, making each update much cheaper (a minimal loop sketch follows below).
    SGD is still noisy: the random sampling makes it jitter on its way to convergence. Other methods help it converge faster: SGD + momentum, AdaGrad, RMSProp, Adam. Each of them has a learning rate, and we should find a good learning rate for each dataset (e.g. the default learning rate of Adam in Keras is 0.001).
    We can use dropout to prevent overfitting.
    To prevent overfitting we have three options: get more data, simplify the network (dropout, fewer layers), or augment the data (data augmentation).
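
    A minimal sketch of such a mini-batch SGD loop; the toy linear least-squares model, data, and hyperparameter values are illustrative assumptions, not from the lecture:

        import numpy as np

        # Mini-batch SGD on a toy linear least-squares problem: each update
        # uses the gradient of a small random batch of data points, not the
        # whole dataset, which is the point made in the summary above.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))                       # 1000 data points, 5 features
        true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
        y = X @ true_w + 0.1 * rng.normal(size=1000)

        w = np.zeros(5)
        learning_rate, batch_size = 1e-2, 32

        for step in range(500):
            idx = rng.integers(0, len(X), size=batch_size)   # sample a random mini-batch
            Xb, yb = X[idx], y[idx]
            dw = 2 * Xb.T @ (Xb @ w - yb) / batch_size       # gradient of the batch MSE
            w += -learning_rate * dw                         # vanilla update: x += -lr * dx

        print(np.round(w, 2))                                # close to true_w after training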

  • @boringmanager9559 · 6 years ago · +8

    Some guy playing Dota vs a neural network: over a million views.
    A genius explaining how to build a neural network: 40k views.

    • @pawelwiszniewski · 5 years ago · +3

      People can relate to playing a game much more easily. Understanding how it works is hard :)

  • @mostinho7 · 4 years ago · +4

    The intro to ConvNets is all history; skip to the next lecture for ConvNets.

  • @iliavatahov9517 · 8 years ago · +2

    Great job! The end is so inspiring!

  • @ArdianUmam · 7 years ago · +1

    At 9:12, does SGD refer to literal SGD (training with only ONE randomly chosen example) or to mini-batch gradient descent? The web lecture notes state that the term SGD is often used for the mini-batch version, not literally SGD with only one example.

  • @BruceLee-zw3wr · 7 years ago · +2

    Pretty hard stuff

  • @DailosGuerra · 7 years ago · +2

    43:33 Funny moment :)))

  • @sokhibtukhtaev9693 · 6 years ago · +1

    At 44:06, is he saying that dropout is applied to different neurons in each epoch? Say I have an x-10-10-y network (x = input, y = output, 10s = hidden layers). In one epoch (forward prop + backprop), dropout is applied to, say, the 3rd, 5th and 9th neurons of the first hidden layer and the 2nd, 5th and 8th neurons of the second hidden layer; in the second epoch, it is applied to the 5th, 6th and 10th neurons of the first layer and the 1st, 7th and 10th neurons of the second hidden layer. Does that mean we effectively have as many models as epochs? Can someone clear this up for me?

    • @souvikbhattacharyya2480 · 21 days ago

      Yes, I think you are right, but you are talking about mini-batches, not epochs. An epoch is one full pass (fprop + backprop) over every data point in your training set, i.e. 1 epoch = several mini-batches.
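
      A minimal inverted-dropout sketch showing how a fresh mask is drawn on every forward pass; the layer sizes, keep probability, and variable names are illustrative assumptions, not the lecture's code:

        import numpy as np

        # Inverted dropout: a new random binary mask is drawn on every forward
        # pass (i.e. every mini-batch iteration), so each iteration trains a
        # different "thinned" sub-network that shares the same weights.
        p = 0.5                                     # probability of keeping a unit
        rng = np.random.default_rng(0)

        def forward_train(x, W1, W2):
            h = np.maximum(0, x @ W1)               # hidden layer with ReLU
            mask = (rng.random(h.shape) < p) / p    # fresh mask each call; /p keeps the scale
            h = h * mask                            # zero out a random subset of units
            return h @ W2

        def forward_test(x, W1, W2):
            h = np.maximum(0, x @ W1)               # no dropout (and no rescaling) at test time
            return h @ W2

        W1 = 0.1 * rng.normal(size=(4, 10))
        W2 = 0.1 * rng.normal(size=(10, 3))
        x = rng.normal(size=(2, 4))                 # a mini-batch of 2 examples

        out1 = forward_train(x, W1, W2)             # different masks on each call,
        out2 = forward_train(x, W1, W2)             # so these usually differ
        print(np.allclose(out1, out2))              # typically False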

  • @omeryalcn5797 · 6 years ago

    There is one issue: we don't usually normalize the data when it is an image, but when we use batch normalization we do normalize. Is that a problem?

  • @randywelt8210 · 8 years ago

    8:50 noisy signal: how about using a Kalman filter?

  • @bayesianlee6447 · 6 years ago

    What is vanilla update? :)

  • @champnaman · 7 years ago

    Funny moment at @20:40

  • @fatihbaltac1482 · 5 years ago

    Thanks a lot :)

  • @jiexc4385 · 4 years ago

    What happened at 43:38? What's so funny?

    • @vivekloganathan9386 · 2 years ago · +3

      (Someone's Siri was mistakenly triggered and said this)
      Siri: "I am not sure what you said"

  • @nguyenthanhdat93 · 7 years ago · +2

    The best AI course I have ever taken. Thank you, Andrej!!!

  • @WahranRai · 6 years ago · +3

    33:12 L-BFGS, not to be confused with LGBT

  • @nikolahuang1919 · 6 years ago

    The big idea behind the momentum update is smart! But it is clearly an interim method.