This is definitely the most underrated machine learning channel of all time.
Yeah and I hope that UA-cam's machines can learn about it, and start recommending it more! 😎
The structure of this playlist is superb. It introduces concepts you're not yet familiar with while giving you the bigger picture, so it keeps you wanting more, and then solves these mysteries one by one at a later stage. Very smart; this is a golden way to approach education in general, since an overly sequential structure is a bit boring.
Hey Ziqiang - Thank you! I'm really glad to hear you like the teaching style and that you're finding value in the content!
This channel is a hidden gem. Your explanations clear up a lot! They're precise and simple, love it!
I love how you use actual Python code to explain it! It helps me get familiar with the Keras library used for Neural Networks too!
Thank you very much for this series. Your tutorial is professionally ordered and delivered clearly and coherently. You are a natural teacher:)
These videos are amazing. I love the short format. They contain a balance between theory, mathematics, and code and they are very engaging. Great job!
Wow, this channel is seriously underrated... thanks a lot for such great content!
Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/jWT-AX9677k
Your Keras playlist...FANTASTIC!
One of the best videos to learn deep learning
LOVED THE WAY YOU TEACH....
Learning rate? More like "Whoa, these videos are great." Thanks again for sharing!
love all the videos. Thank you so much for putting these together.
You are welcome!
Really nicely explained! Thanks!
I must say, you have a lovely voice!!
{
  "question": "The learning rate determines in part the size of the:",
  "choices": [
    "Adjustment made to the weights",
    "Neural network",
    "Hidden layers",
    "Hyperparameters"
  ],
  "answer": "Adjustment made to the weights",
  "creator": "Chris",
  "creationDate": "2019-12-11T04:13:32.163Z"
}
Thanks, Chris! Just added your question to deeplizard.com
Very concise, thank you!
Your videos are so very helpful! Thank you!!
More power to you. This is so good.
Thanks for your videos.
Just awesome... Love you... Keep going... ✌🙌
thank you very much for this clear and helpful explanation.
Really fruitful... all the best for your effort!
I noticed you use a regularizer in your second hidden layer. Didn't see that in the previous video. According to some online sources, it seems to be used to handle overfitting. I'm not sure why it's 0.01, but I'm hoping this is covered in one of your future videos.
Explanations are brief and understandable, and although there doesn't appear to be insane depth, with my current level of understanding I definitely appreciate the breadth. Thanks for the videos, looking forward to learning more!
Yes, regularization is covered in a later episode in the course :)
deeplizard.com/learn/video/iuJgyiS7BKM
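For readers following along, here's a minimal sketch (the layer sizes are made up, not the exact model from the video) of how an L2 regularizer is attached to a hidden layer in Keras; the 0.01 is the regularization factor that scales the weight penalty added to the loss:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import regularizers

model = Sequential([
    Dense(16, input_shape=(1,), activation='relu'),
    Dense(32, activation='relu',
          kernel_regularizer=regularizers.l2(0.01)),  # adds 0.01 * sum(w^2) to the loss
    Dense(2, activation='softmax'),
])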
Awesome!
thank you
great videos, thank you.
Thank you :)
At 2:15 you mention that if the learning rate is too high, you risk overshooting the minimum. Does that mean the loss would oscillate and not converge? Thanks for the help.
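A toy numerical sketch of that scenario (my own example, not from the video), using plain gradient descent on f(w) = w**2: the update is w <- w - lr * 2w = w * (1 - 2*lr), so whether the iterates converge, oscillate while shrinking, or diverge depends on how far the rate overshoots.

def gradient_descent(lr, w=1.0, steps=5):
    history = [w]
    for _ in range(steps):
        w = w - lr * 2 * w  # gradient of f(w) = w**2 is 2*w
        history.append(w)
    return history

print(gradient_descent(lr=0.1))  # converges smoothly toward 0
print(gradient_descent(lr=0.9))  # overshoots each step and oscillates, but still shrinks
print(gradient_descent(lr=1.1))  # overshoots so far that the iterates diverge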
Hi. It's such a superb playlist. I am actually a beginner in DL. At 1:29 you mentioned a learning rate is a small number usually ranging from 0.01 to 0.0001, but the actual value can vary. Could you please tell me what the actual value is? Thanks
Was just indicating that the learning rate value can vary anywhere within the range [0.0001, 0.01].
I typed "model.optimizer.learning_rate" to display the value set for learning rate. Instead of providing the exact value (as shown in the video and the blog), I got the following output:
With every video I'm getting more and more interested, curious about what comes next.
How do we choose the learning rate? Can you please share some detailed information, and also some more on the choice of activation function?
Do we multiply the learning rate by the gradient of the loss (d(Loss)/d(w)), subtract this value from the original weight value, and then update the weight with the result? Is that how we do it? Thanks so much. Your videos are amazing. Very hard to find videos that actually help you understand neural networks so well. Keep up the good work.
Hey Adhith - Thank you! I'm really glad to hear that the videos are helping you to understand neural networks.
In regards to your question - Yes, you're absolutely right! To take it a step further, if you're interested in learning how the gradient itself is computed, this process is fully detailed in vids #27-31 of the playlist (ua-cam.com/play/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU.html).
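For concreteness, a minimal sketch of the confirmed update rule in plain SGD terms (the numbers here are made up for illustration):

learning_rate = 0.01
weight = 0.5   # current weight value (made-up)
grad = 0.2     # d(Loss)/d(w) for this weight (made-up)

# new weight = old weight - learning rate * gradient
weight = weight - learning_rate * grad
print(weight)  # 0.498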
Thank you
Something you could also have said about lowering the learning rate is that it increases the risk of getting stuck in a local minimum.
How is the learning rate calculated for scaled conjugate gradient?
Change the instruction model.optimizer.lr=0.001 ==> model.optimizer.lr.set_value(0.001) if you are getting an error.
Thanks for posting, Atif! Note that the corresponding blog for this video has been updated with the new way we should be calling and setting the learning rate, which differs from your change.
deeplizard.com/learn/video/jWT-AX9677k
The reason why is because, after becoming integrated with TensorFlow, Keras changed to use learning_rate rather than lr. lr is currently only included for backward compatibility. This change was integrated into deeplizard Keras code in change #6882488 shown on deeplizard.com/resources
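As a minimal sketch of the current naming (assuming TF2-era Keras, where Keras ships inside TensorFlow; the tiny model is just for illustration):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([Dense(2, input_shape=(1,), activation='softmax')])
model.compile(optimizer=Adam(learning_rate=0.0001),
              loss='sparse_categorical_crossentropy')

print(model.optimizer.learning_rate)   # read the current value
model.optimizer.learning_rate = 0.001  # set a new value; lr remains only as a backward-compatible alias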
deeplizard, thank you for posting these videos. I will have a look at the blog as well.
Well crap... I pooped myself from the excitement... great content
💩
What confuses me is that the derivative of a scalar is zero...
Request and suggestion:
Can you make a video on fingerprint feature extraction applied to a transfer learning model,
and also hyperparameter optimisation in a deep CNN with Nelder-Mead and cuckoo evolutionary optimization algorithms, as your next video?
Hey Bhanu - Thanks for the suggestion! I have topics that are already in process and lined up for my upcoming videos, but I'll put these ideas on my list as possible topics to cover in future videos.
Could you please give us a lesson on sequence-to-sequence learning with neural networks?
Hey hussam - I'll add this to my list of topics to consider for future videos. Thanks for the suggestion!
There should be an option to like a video in full screen.
I begin zooming in on the code starting in later episodes. Much easier to read :)
In the meantime, check out the corresponding blogs for each episode on deeplizard.com to see the written version for clearer reading.
Please use a simple way to explain things, not a complicated way with complicated terms.