When our teacher taught us this, it just went over my head, but now, thanks to your great explanation, I finally understand the concept of gradient descent. Thank you so much.
What if the graph (loss function vs. weight) has a local minimum? The algorithm wouldn't update the parameters there. Is there any way to reach the global minimum of the graph then?
That's really a good question. Kindly refer to the following article: www.spiedigitallibrary.org/conference-proceedings-of-spie/3066/0000/Techniques-for-avoiding-local-minima-in-gradient-descent-based-ID/10.1117/12.276095.short
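To make the idea concrete, here is a minimal Python sketch (a toy example of my own, not the video's code and not necessarily the technique the article proposes): plain gradient descent settles in whichever basin it starts in, and a simple random-restart loop often recovers the global minimum.

import random

# A toy non-convex "loss" (my own illustration). It has a local minimum near
# w = 2.55 and a lower, global minimum near w = -2.89.
def loss(w):
    return 0.1 * w**4 - 1.5 * w**2 + w

def grad(w):
    return 0.4 * w**3 - 3.0 * w + 1.0   # derivative of loss(w)

def gradient_descent(w0, lr=0.01, steps=2000):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)                # plain gradient descent update
    return w

# Starting to the right of the barrier, plain gradient descent settles in the
# local minimum near 2.55 and never moves toward the global one.
print("single run from w0 = 2.5:", gradient_descent(2.5))

# One simple workaround: restart from several random initial weights and keep
# the run with the lowest final loss; usually at least one restart lands in
# the basin of the global minimum.
random.seed(0)
restarts = [gradient_descent(random.uniform(-4.0, 4.0)) for _ in range(10)]
print("best of 10 random restarts:", min(restarts, key=loss))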
Thank you so much. I'm learning data science through your channel. All topics are well explained. Keep making videos. Thank you.
The meaning of bias here is different from the one you explained in the "Bias variance tradeoff" video, right?
Great sir, clearly explained
Excellent walk through!
@2:55 Getting a job or not is a classification problem, so how can we use a regression model? Please explain, sir.
Really great tutorial, thanks
Thanks
Welcome
How to get the full course?
Sir, please at least explain why the value of the loss function is high when the weight is too small or too large.
Thanks
Thanks