Learning Rate in a Neural Network explained

  • Published 10 Dec 2024

COMMENTS • 59

  • @abail7010
    @abail7010 5 years ago +63

    This is definitely the most underrated machine learning channel of all time.

    • @PunmasterSTP
      @PunmasterSTP 2 years ago

      Yeah, and I hope that YouTube's machines can learn about it, and start recommending it more! 😎

  • @JimmyCheng
    @JimmyCheng 6 years ago +32

    The structure of this playlist is superb. It introduces concepts before they're fully familiar while giving you the bigger picture, so it keeps you wanting more and then solves these mysteries one by one at a later stage. Very smart. This is a great way to approach education in general, too; an overly sequential structure is a bit boring.

    • @deeplizard
      @deeplizard  6 years ago +2

      Hey Ziqiang - Thank you! I'm really glad to hear you like the teaching style and that you're finding value in the content!

  • @AnandaKevin28
    @AnandaKevin28 3 years ago +4

    This channel is a hidden gem. Your explanations clear up a lot! They're precise and simple, love it!

  • @tymothylim6550
    @tymothylim6550 3 years ago +4

    I love how you use actual Python code to explain it! It helps me get familiar with the Keras library used for neural networks too!

  • @eliko3211
    @eliko3211 3 years ago +5

    Thank you very much for this series. Your tutorial is professionally organized and delivered clearly and coherently. You are a natural teacher :)

  • @DomnulFisk1
    @DomnulFisk1 3 years ago +3

    These videos are amazing. I love the short format. They strike a balance between theory, mathematics, and code, and they are very engaging. Great job!

  • @WeakCoder
    @WeakCoder 4 years ago +4

    Wow, this channel is seriously underrated... thanks a lot for such great content!

  • @deeplizard
    @deeplizard  6 years ago +1

    Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/jWT-AX9677k

  • @gwayx105
    @gwayx105 6 years ago +3

    Your Keras playlist...FANTASTIC!

  • @TUSHARSHARMA-px2se
    @TUSHARSHARMA-px2se 3 years ago +1

    One of the best videos to learn deep learning

  • @amritakaul3300
    @amritakaul3300 2 years ago +1

    LOVED THE WAY YOU TEACH....

  • @PunmasterSTP
    @PunmasterSTP 2 years ago +1

    Learning rate? More like "Whoa, these videos are great." Thanks again for sharing!

  • @edkdoc
    @edkdoc 5 years ago +2

    Love all the videos. Thank you so much for putting these together.

  • @durarara911
    @durarara911 2 years ago +1

    Really nicely explained! Thanks!

  • @mechanicalbaba2484
    @mechanicalbaba2484 3 years ago +2

    I must say, you have a lovely voice!!

  • @thespam8385
    @thespam8385 5 years ago

    {
    "question": "The learning rate determines in part the size of the:",
    "choices": [
    "Adjustment made to the weights",
    "Neural network",
    "Hidden layers",
    "Hyperparameters"
    ],
    "answer": "Adjustment made to the weights",
    "creator": "Chris",
    "creationDate": "2019-12-11T04:13:32.163Z"
    }

    • @deeplizard
      @deeplizard  5 years ago

      Thanks, Chris! Just added your question to deeplizard.com

  • @greedyfishbones
    @greedyfishbones 1 year ago +1

    Very concise, thank you!

  • @koonsickgreen6272
    @koonsickgreen6272 5 years ago +1

    Your videos are so very helpful! Thank you!!

  • @MrAbhay0007
    @MrAbhay0007 4 years ago +1

    More power to you. This is so good.

  • @ravikishore5357
    @ravikishore5357 4 years ago +1

    Thanks for your videos.

  • @Sachin-bg8gb
    @Sachin-bg8gb 5 years ago +1

    Just awesome... Love you... Keep going... ✌🙌

  • @qusayhamad7243
    @qusayhamad7243 4 years ago

    Thank you very much for this clear and helpful explanation.

  • @pouryafarzi7635
    @pouryafarzi7635 4 years ago +1

    Really fruitful... all the best for your effort!

  • @seanyeo673
    @seanyeo673 3 years ago +2

    I noticed you use a regularizer in your second hidden layer; I didn't see that in the previous video. According to some online sources, it seems to be used to handle overfitting. I'm not sure why it's 0.01, but I'm hoping this is covered in one of your future videos (a sketch of such a regularizer follows this thread).
    Explanations are brief and understandable, and although there doesn't appear to be insane depth, at my current level of understanding I definitely appreciate the breadth. Thanks for the videos, looking forward to learning more!

    • @deeplizard
      @deeplizard  3 years ago

      Yes, regularization is covered in a later episode in the course :)
      deeplizard.com/learn/video/iuJgyiS7BKM
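
      A minimal sketch of what that regularizer line might look like (assuming tf.keras; the layer sizes here are hypothetical, and 0.01 is the L2 penalty strength, itself a tunable hyperparameter):

      # Sketch (assuming tf.keras): an L2 kernel regularizer on a hidden layer.
      # The 0.01 is the regularization strength; larger values penalize large
      # weights more heavily, which helps curb overfitting.
      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import Dense
      from tensorflow.keras import regularizers

      model = Sequential([
          Dense(16, input_shape=(1,), activation='relu'),
          Dense(32, activation='relu',
                kernel_regularizer=regularizers.l2(0.01)),  # penalizes large weights
          Dense(2, activation='softmax'),
      ])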

  • @Meldorius
    @Meldorius 4 years ago +1

    Awesome!

  • @amirayasmine8490
    @amirayasmine8490 3 years ago +1

    Thank you!

  • @abdulhameed-vo7jq
    @abdulhameed-vo7jq 5 years ago +1

    Great videos, thank you.

  • @justchill99902
    @justchill99902 5 years ago +2

    Thank you :)

  • @harrisoncrettol
    @harrisoncrettol 2 years ago

    At 2:15 you mention that if the learning rate is too high, you risk the possibility of overshooting the minimum. Does that mean that the loss would oscillate and not converge? Thanks for the help.
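
    A toy sketch (not from the video) makes the overshoot concrete: for gradient descent on f(w) = w², each update multiplies w by (1 - 2·lr), so once lr exceeds 1 the iterates flip sign and grow, i.e., the loss oscillates and diverges:

    # Toy sketch (not from the video): gradient descent on f(w) = w**2,
    # whose minimum is at w = 0. Each update is w <- w - lr * f'(w),
    # i.e. w <- (1 - 2*lr) * w, so lr > 1 makes |w| grow every step.
    def descend(lr, w=1.0, steps=5):
        for _ in range(steps):
            grad = 2 * w          # f'(w) for f(w) = w**2
            w = w - lr * grad     # the standard gradient descent update
        return w

    print(descend(lr=0.1))   # ~0.328: shrinking toward the minimum
    print(descend(lr=1.1))   # ~-2.49: oscillating and diverging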

  • @zahidullah1707
    @zahidullah1707 3 years ago +1

    Hi. It's such a superb playlist. I am actually a beginner in DL. At 1:29 you mentioned that a learning rate is a small number usually ranging from 0.01 to 0.0001, but that the actual value can vary. Could you please tell me what the actual value is? Thanks

    • @deeplizard
      @deeplizard  3 years ago +1

      Was just indicating that the learning rate value can vary anywhere within the range [0.0001, 0.01].

  • @ssffyy
    @ssffyy 4 years ago

    I typed "model.optimizer.learning_rate" to display the value set for learning rate. Instead of providing the exact value (as shown in the video and the blog), I got the following output:
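
    (The output was cut off above. In recent tf.keras versions this attribute typically comes back as a tf.Variable, so printing it shows the variable's repr rather than the bare number. A sketch of extracting the plain value, assuming TensorFlow 2 / tf.keras:)

    # Sketch (assuming TensorFlow 2 / tf.keras): optimizer.learning_rate may be
    # a tf.Variable, so print() shows its repr; K.get_value returns the number
    # whether the attribute is a Variable or already a plain float.
    from tensorflow.keras import backend as K
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam

    model = Sequential([Dense(1, input_shape=(1,))])
    model.compile(optimizer=Adam(learning_rate=0.0001), loss='mse')

    print(model.optimizer.learning_rate)                       # possibly a Variable repr
    print(float(K.get_value(model.optimizer.learning_rate)))   # bare numeric value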

  • @chirag6108
    @chirag6108 5 years ago

    With every video I'm getting more and more curious about what comes next.
    How do we choose the learning rate? Can you please share some detailed information, and some more on the choice of activation function?

  • @adhiththyagarajan6146
    @adhiththyagarajan6146 6 years ago +2

    Do we multiply the learning rate with the gradient of the loss (d(Loss)/d(w)), then subtract this value from the original weight value, and update the weight with the result? Is that how we do it? Thanks so much. Your videos are amazing. It's very hard to find videos that actually help you understand neural networks so well. Keep up the good work.

    • @deeplizard
      @deeplizard  6 years ago

      Hey Adhith - Thank you! I'm really glad to hear that the videos are helping you to understand neural networks.
      In regards to your question - Yes, you're absolutely right! To take it a step further, if you're interested in learning how the gradient itself is computed, this process is fully detailed in vids #27-31 of the playlist (ua-cam.com/play/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU.html).
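
      Spelled out, the update confirmed above is w_new = w_old - learning_rate · d(Loss)/d(w). A one-line sketch with illustrative numbers:

      # The update rule confirmed above, spelled out (names illustrative):
      #     w_new = w_old - learning_rate * dLoss_dw
      def sgd_step(w_old, dLoss_dw, learning_rate):
          return w_old - learning_rate * dLoss_dw

      # e.g. a weight of 0.5 with gradient 0.2 and learning rate 0.01:
      print(sgd_step(0.5, 0.2, 0.01))  # 0.5 - 0.01 * 0.2 = 0.498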

  • @aruneshwarv1051
    @aruneshwarv1051 3 years ago

    Thank you

  • @vertox4837
    @vertox4837 3 years ago

    Something you could also have mentioned about lowering the learning rate is that it increases the risk of getting stuck in a local minimum.

  • @Dr.Turyalai
    @Dr.Turyalai 5 years ago

    How is the learning rate calculated for scaled conjugate gradient?

  • @Atif-Khan
    @Atif-Khan 4 years ago

    Change the instruction model.optimizer.lr=0.001 to model.optimizer.lr.set_value(0.001) if you are getting an error.

    • @deeplizard
      @deeplizard  4 years ago +1

      Thanks for posting, Atif! Note that the corresponding blog for this video has been updated with the new way we should be calling and setting the learning rate, which differs from your change (a sketch follows this thread).
      deeplizard.com/learn/video/jWT-AX9677k
      The reason why is that, after becoming integrated with TensorFlow, Keras changed to use learning_rate rather than lr; lr is currently only included for backward compatibility. This change was integrated into deeplizard Keras code in change # 6882488 shown on deeplizard.com/resources

    • @Atif-Khan
      @Atif-Khan 4 years ago

      @deeplizard Thank you for posting these videos. I will have a look at the blog as well.
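
      For reference, a sketch of the newer spelling described above (assuming TensorFlow 2-era tf.keras, where lr survives only as a deprecated alias of learning_rate):

      # Sketch (assuming TensorFlow 2 / tf.keras): `lr` is kept only as a
      # deprecated alias, so read and set the rate via `learning_rate`.
      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import Dense
      from tensorflow.keras.optimizers import Adam

      model = Sequential([Dense(1, input_shape=(1,))])
      model.compile(optimizer=Adam(learning_rate=0.01), loss='mse')

      model.optimizer.learning_rate = 0.001  # preferred over model.optimizer.lr
      print(model.optimizer.learning_rate)   # confirm the new value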

  • @TheAmeer3881
    @TheAmeer3881 5 years ago +1

    Well crap... I pooped myself from the excitement... great content

  • @imbabywild
    @imbabywild 10 months ago

    What confuses me is that the derivative of a scalar is zero...

  • @BhanuTejapolukonda
    @BhanuTejapolukonda 6 years ago +1

    Request and suggestion:
    Could you make a video on fingerprint feature extraction applied to a transfer learning model,
    and also on hyperparameter optimization in deep CNNs with Nelder-Mead and cuckoo evolutionary optimization algorithms, as your next video?

    • @deeplizard
      @deeplizard  6 years ago

      Hey Bhanu - Thanks for the suggestion! I have topics that are already in process and lined up for my upcoming videos, but I'll put these ideas on my list as possible topics to cover in future videos.

  • @hussamshamek6331
    @hussamshamek6331 6 years ago +1

    Could you please give us a lesson on:
    sequence-to-sequence learning with neural networks

    • @deeplizard
      @deeplizard  6 years ago +1

      Hey hussam - I'll add this to my list of topics to consider for future videos. Thanks for the suggestion!

  • @Arjun147gtk
    @Arjun147gtk 4 years ago

    There should be an option to like a video in full screen.

    • @deeplizard
      @deeplizard  4 years ago +1

      I begin zooming in on the code starting in later episodes. Much easier to read :)
      In the meantime, check out the corresponding blogs for each episode on deeplizard.com to see the written version for clearer reading.

  • @mohamedgamea9170
    @mohamedgamea9170 5 years ago

    Customer services

  • @aakashdewangan7313
    @aakashdewangan7313 4 years ago

    Please use a simple way to explain things, not a complicated way with complicated terms.