If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.
Please indicate your variables, or use pictures, to explain what Z and A are, etc.
@@lamis_18 okay
You explain these concepts more completely and simply than any other video I’ve seen. Thank you
Just started this playlist and found it very well explained. Thank you for great work.
Thank you!
Thanks a lot for This Amazing Introductory Lecture 😀
Lecture - 3 Completed from This Neural Network Playlist
Thank you so much, you've made it so easy to understand. I have exams tomorrow. You saved me, God bless!
You're welcome! I am glad that the video helped you. Hope your exam went well!
damn, this is lowkey a really good and insightful way of explaining this. I'll be sharing with my students. Exceptional tutorial
Softmax function now makes sense. thanks!
@@blackswann9555 haha, glad I could help! Thank you so much!
A clear and brief explanation of the activation functions, Sir Patel. Wonderful. I am acquiring new knowledge from every one of your videos. Great going!
Thank you so much ! I am glad you found my videos helpful.
Explained very crisply, very helpful! Thanks
@@s.rt_ thank you! Glad to help.
Very underrated channel. Great explanation!
You are saving me rn from my midterm tomorrow. Thank you!!!
Really happy to hear this. Glad the videos helped you! 🙂
Will this explanation be enough for a beginner in ML? I understood what you have explained. I am learning from you. Thank you.
Thanks a lot for sharing! It really helped me understand why and when to use which activation function. Very good!
very very very useful for me. Thank you
Glad I could help!
Amazing Explanation, just one mistake at 10:16 to 10:24 that should be "Sigmoid and TanH" not "ReLU and TanH"...
Perfect explanation. thank you. keep going
Great work bro 👍
First of all, thank you very much for these videos. I have a question about cross entropy. I understand how cross entropy works. I don't understand why it works. I would appreciate it if you make videos about these topics.
Thanks for the suggestion. Will try to cover this topic.
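In case it helps while a dedicated video is pending, here is a rough Python sketch of why cross entropy is a sensible loss (the `binary_cross_entropy` helper and the example probabilities are purely illustrative, not from the video): a confident correct prediction costs almost nothing, while a confident wrong prediction is punished very heavily, because -log(p) blows up as p approaches 0.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)], averaged over samples."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 1.0, 0.0])

# Confident and correct -> tiny loss
print(binary_cross_entropy(y_true, np.array([0.99, 0.95, 0.05])))  # ~0.04
# Confident but wrong on the first sample -> loss jumps sharply,
# because -log(p) grows without bound as p -> 0
print(binary_cross_entropy(y_true, np.array([0.01, 0.95, 0.05])))  # ~1.57
```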
very well explained, thanks so much for the video
Amazing explanation
Thank you!
Very well explained
Thank you! Glad it was helpful!
Well explained, brother. Keep it up.
@10:18 Wouldn't it be "both the tanh and sigmoid functions (and not 'ReLU') had this disadvantage of the vanishing gradient problem..."? ReLU is its solution, right?
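A quick numerical illustration of why the vanishing-gradient issue belongs to sigmoid/tanh rather than ReLU (just a sketch using the standard textbook derivative formulas, not code from the video): their derivatives are always below 1, so multiplying them across many layers during backprop shrinks the gradient toward 0, while ReLU's derivative stays exactly 1 for positive inputs.

```python
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)            # never larger than 0.25

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2    # at most 1, and small once |x| grows

def relu_grad(x):
    return 1.0 if x > 0 else 0.0    # exactly 1 for any positive input

# In backprop the per-layer derivatives get multiplied together, so with
# sigmoid/tanh the product shrinks exponentially with depth:
x = 2.0
for depth in (1, 5, 20):
    print(depth, sigmoid_grad(x) ** depth, tanh_grad(x) ** depth, relu_grad(x) ** depth)
# sigmoid: ~0.105 -> ~1.3e-05 -> ~3e-20   (vanishes)
# tanh:    ~0.071 -> ~1.8e-06 -> ~1e-23   (vanishes)
# relu:     1.0   ->  1.0     ->  1.0     (gradient survives)
```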
Good summary, thank you
Keep it up bro, nice explanation ✅
Thank you!
soooooo grateful for you
PERFECT
Thank you!
Great explanation 🔥👏🏻
Does ReLU make f(x) = 0 even if x is very small but > 0? Because with tanh/sigmoid the rate of change of the gradient becomes very small but is still > 0, whereas with ReLU, f(x) seems to be 0 only when x ≤ 0.
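Assuming the standard definition f(x) = max(0, x), here is a tiny sketch answering this directly: any positive input, however small, passes through unchanged, and the output is 0 only for x ≤ 0.

```python
def relu(x):
    """Standard ReLU: f(x) = max(0, x)."""
    return max(0.0, x)

for x in (-1.0, -1e-6, 0.0, 1e-6, 0.5, 3.0):
    print(f"x = {x:>8}: relu(x) = {relu(x)}")
# Output is 0 only for x <= 0; a tiny positive input like 1e-6
# comes out as 1e-6, not 0.
```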
Is it possible for you to add/share further reading documents ?
How does ReLU solve the vanishing gradient problem, since part of the gradient is zero for x < 0?
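The usual answer is that for every active unit (x > 0) the ReLU derivative is exactly 1, so backprop never multiplies by factors smaller than 1; the zero gradient for x < 0 is a separate issue (the "dying ReLU" problem), which variants such as Leaky ReLU address. A rough sketch, with helper names of my own choosing rather than anything from the video:

```python
import numpy as np

def relu_grad(x):
    return np.where(x > 0, 1.0, 0.0)        # 1 for active units, 0 otherwise

def leaky_relu_grad(x, alpha=0.01):
    return np.where(x > 0, 1.0, alpha)       # small non-zero slope for x < 0

x = np.array([-2.0, -0.5, 0.3, 4.0])
print(relu_grad(x))        # [0.   0.   1.   1.  ] -> gradient flows only through active units
print(leaky_relu_grad(x))  # [0.01 0.01 1.   1.  ] -> no unit is ever completely "dead"
```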
Which one is a non-symmetric activation function?
Hey bro
I am a beginner learning deep learning. Can you suggest any materials for learning deep learning from scratch?
Hi Suraj, I would highly recommend you to take the coursera course from Andrew Ng for Deep Learning.
Here’s its link : www.coursera.org/specializations/deep-learning
This course is for absolute beginners and you will develop better understanding of deep learning.
Also, if you feel like you can't afford it, there are ways on Coursera to take courses for free, like auditing or applying for financial aid.
I hope you find Deep Learning interesting !!
@@MachineLearningWithJay Thank you brother 😄for your suggestion
💐💐💐
😇😇
I think I’m in love with you
Jk but you’re amazing
Hahahaha!! Pleasure!
Brother, why doesn't anyone explain this in Hindi? 🤢🤢🤢🤢🤢 🤮🤮🤮
Try the Code Basics Hindi channel. Maybe you will like it.
@@MachineLearningWithJay Brother, what is that 'e' in the softmax function, how do you calculate the values of those e terms, and how does adding them up give something like 0.9?
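For anyone with the same question: the 'e' is Euler's number (≈ 2.718). Softmax raises e to each raw score and divides by the sum of those exponentials, so the outputs become probabilities that add up to 1, and a single class can end up with a value like 0.9. A minimal sketch with made-up scores, not the exact numbers from the video:

```python
import numpy as np

def softmax(z):
    """softmax(z_i) = e^{z_i} / sum_j e^{z_j}, where e ≈ 2.71828 (Euler's number)."""
    exps = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return exps / np.sum(exps)

z = np.array([2.0, 1.0, 0.1])      # example raw scores (logits)
p = softmax(z)
print(p)          # ~[0.659, 0.242, 0.099] -> largest score gets the highest probability
print(p.sum())    # the probabilities always add up to 1.0
```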
Try to talk in your normal accent.
Great explanation 🔥👏🏻
Thanks!