If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.
Excellent video! Bro, why do we have multiple neurons in every hidden layer? Is it from the point of view of introducing non-linearity?
@@arpit743 Yes, but not entirely. Multiple neurons allow us to capture complicated patterns; a single neuron won't be able to capture them from the dataset.
@@MachineLearningWithJay Thanks a lot! But why is it that multiple neurons allow for complicated boundaries?
@@arpit743 More refined outputs let you see the limitations of your network's decision boundaries, so you can pinpoint the exact location of a problem and correct it as needed. It isn't so much that more neurons "allow" complicated boundaries; rather, you are able to see your complicated boundaries and work through them.
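To make this concrete, here is a minimal NumPy sketch (not from the video; all weights are illustrative) contrasting the two cases: a single sigmoid neuron, whose decision boundary is always the straight line w·x + b = 0, and a tiny hidden layer, whose boundary can bend:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# A single sigmoid neuron: its decision boundary (output = 0.5) is
# always the straight line w @ x + b = 0, however w and b are trained.
def single_neuron(x, w, b):
    return sigmoid(w @ x + b)

# Two hidden neurons feeding one output neuron: the output mixes two
# nonlinear functions of x, so its 0.5 contour can bend and curve.
def tiny_network(x, W1, b1, W2, b2):
    a1 = sigmoid(W1 @ x + b1)      # hidden layer, e.g. shape (2,)
    return sigmoid(W2 @ a1 + b2)   # output neuron
```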
Sir, there is a mistake at timestamp 0:41: what you have written as a2[1] should be a2[2], because the a values you multiply with the weights in that weighted sum come from the first hidden layer and are used to find the second hidden layer's value, a2[2], not a2[1]. But you really teach great. Thank you!
I've always felt as if I was on the cusp of understanding neural nets but this video brought me past the hump and explained it perfectly! Thank you so much!
I am really elated hearing this. Glad it helped you out. Thank you so much for your appreciation. 🙂
This video should be titled "Explain Forward and Backward Propagation to Me Like I'm Five". Thanks man, you saved me a lot of time.
One of the Best Comments I have seen. Thank you so much! And thanks for the title idea 😂😄
Boy you have a wonderful way of explaining things. Thanks loadsss :))
@@hajiclub haha… thank you so much! Means a lot!
Literally best. Crisp and clear!! Thank you
You explain better than popular course instructor on deep learning
Thanks for the compliment 😇
I am sure he is talking about Andrew Ng lol. His explanation in that video is too detailed and the notations are too confusing. But the same explanation in his Machine Learning Specialization course is much better.
Thanks a lot for This Amazing Introductory Lecture 😊
Lecture - 2 Completed from This Neural Network Playlist
You explained it in a very clear and easy way. Thank you, this is so helpful!
You're welcome!
so glad I found this channel!!
Thank you! I appreciate your support 😇
best video on youtube for this topic
Thank you so much. Much appreciate your comment! 🙂
This was more helpful than my lectures!
Glad to help!!
Very informative video. Explained all the terms in a simple manner. Thanks a lot!
Absolutely loved the way you explain. So easy to understand. Thank you
Best explanation I've seen so far
I'm a bit confused by the exponent notations, since some of them don't correspond to the others.
Super, sir. I have learned a lot from this, including the way of calculation. It's very useful for our studies. Thank you, sir.
Happy to help!
Thanks man. The slides were amazingly put up.
Thank you so much!
You are great. It will be very good if you continue.
Thank you for your support! I will surely continue making more videos.
Top Class Explanation!
Glad it was helpful!
Such a simple and neat explanation.
Thank you!
Best explanation, best playlists.
I don't usually interact with the algorithm by giving likes and dropping comments, but you beat me into submission with this. Hopefully I understand the rest of it too lol.
Nicely explained. Keep up the good job!
Very helpful and to the point and correct!
Great video, and great explanation thanks dude!
You're welcome!
Excellent video sir
@@blackswann9555 Glad I could help!
At 0:58, in a1[1] = activation(...), the last term should be W13[1]*a3[0], not W13[1]*a3[1].
I was just trying to figure this out. I agree, and the previous slides have the wrong notation as well.
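For reference, the corrected equation would read as follows (a reconstruction from these comments, assuming $g$ is the activation function and $b_1^{[1]}$ the bias term of the first hidden layer):

$$a_1^{[1]} = g\left(W_{11}^{[1]} a_1^{[0]} + W_{12}^{[1]} a_2^{[0]} + W_{13}^{[1]} a_3^{[0]} + b_1^{[1]}\right)$$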
This is so well explained. Thank you!
you are really awesome. love your teaching ability
Thank you so much!
@@MachineLearningWithJay, you are most welcome bro. Please make a video on implementing Multiclass Logistic Regression using the One-vs-All/One-vs-One method.
@@mdtufajjalhossain1246 Okay! Thanks for suggesting!
This was actually pretty straightforward.
Glad it helped you!
Fantastic explanation. Thank you
Awesome, really helpful! Thank you
You're welcome!
Good Explanation!!
Thank you!
Excellent explanation jazakallah bro
Great video! Please also make a video on SVM as soon as possible.
Okay, sure! Thank you so much for your suggestion. I have been asked a lot to make a video on SVM, so I will try to make it just after finishing this Neural Network playlist.
Amazing work, keep it going :)
Thank You!
great video as always
Thank You soo much!!!
You dropped something ... 👑
haha.. what is it? Thanks btw
Your videos are very helpful. It would be great if you sorted the videos in the playlist. Thank you 😇😇😇
Thank you sir, it was really helpful.
You're welcome!
Your videos on neural networks are really good. Can you please also upload videos for generalized neural networks too, that would really be helpful P.S Keep Up the good work!!!
Thank you so much for your feedback. I will surely consider making videos on generalized neural networks.
Good job. But in gradient descent, W2 and W1 must be updated simultaneously.
Thank you! Yes they should be updated simultaneously.
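A minimal sketch of what "simultaneously" means in code (the function and dictionary names are hypothetical stand-ins, not the video's code): all gradients are computed from the current parameters first, and only then is anything updated.

```python
import numpy as np

def gradient_descent_step(params, grads, learning_rate=0.01):
    """One simultaneous update: every gradient in grads was computed
    from the same current params before any parameter is changed."""
    for key in params:                 # "W1", "b1", "W2", "b2"
        params[key] = params[key] - learning_rate * grads["d" + key]
    return params

# Illustrative shapes; in practice grads comes from backpropagation.
params = {"W1": np.random.randn(4, 3), "b1": np.zeros((4, 1)),
          "W2": np.random.randn(1, 4), "b2": np.zeros((1, 1))}
grads = {"dW1": np.zeros((4, 3)), "db1": np.zeros((4, 1)),
         "dW2": np.zeros((1, 4)), "db2": np.zeros((1, 1))}
params = gradient_descent_step(params, grads)
```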
Brother, your explanation was great, but there is a mistake I want to point out.
Isn't the equation Z = transpose(W).X + B rather than Z = W.X + B? Hence the weight matrix you have given is wrong, right?
Hi... I have taken the shape of W as (n_h, n_x), so the equation is Z = W.X + B. But if you take W as (n_x, n_h), then the equation becomes Z = transpose(W).X + B.
Both represent the same thing. Hope it helps you.
@@MachineLearningWithJay Thanks for the quick clarification. Makes sense now. Keep up the great work!!
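For anyone following along, a minimal NumPy sketch of the two equivalent conventions discussed above (shapes chosen purely for illustration):

```python
import numpy as np

n_x, n_h, m = 3, 4, 5          # input size, hidden size, batch size
X = np.random.randn(n_x, m)    # examples stored as columns
B = np.zeros((n_h, 1))         # bias, broadcast across the batch

# Convention 1: W has shape (n_h, n_x)  ->  Z = W @ X + B
W = np.random.randn(n_h, n_x)
Z1 = W @ X + B

# Convention 2: W has shape (n_x, n_h)  ->  Z = W.T @ X + B
W_alt = W.T
Z2 = W_alt.T @ X + B

assert np.allclose(Z1, Z2)     # both give Z of shape (n_h, m)
```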
great video
Small doubt: what is f(z1)? I am assuming these are just different types of activation functions, where the input is the current layer's weights times the inputs from the previous layer... is that correct?
Yes, correct… but do check the equations carefully; they include a bias term as well.
@@MachineLearningWithJay Thanks for your prompt response.
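For anyone else wondering, a minimal sketch of what f(z1) computes, with sigmoid as an example activation (tanh or ReLU would slot in the same way; all numbers are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# z1 = current layer's weights times the previous layer's outputs,
# plus the bias; f then turns z1 into the neuron's activation a1.
w1 = np.array([0.5, -0.3, 0.8])    # illustrative weights
b1 = 0.1                           # illustrative bias
a_prev = np.array([1.0, 2.0, 0.5]) # outputs of the previous layer

z1 = w1 @ a_prev + b1
a1 = sigmoid(z1)                   # a1 = f(z1)
```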
Super Bro❤❤❤❤
Where does the algorithm that calculates the next W at 5:30 come from? I know it is intuitive, but does it have something to do with Euler's method? Or another one?
Thank you so much for these incredible videos.
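For context (a reader's note, not an official reply): the rule at 5:30 is the standard gradient-descent update,

$$W := W - \alpha \, \frac{\partial J}{\partial W}$$

where $\alpha$ is the learning rate and $J$ is the cost. The Euler intuition is reasonable: applying the explicit Euler method with step size $\alpha$ to the gradient-flow ODE $dW/dt = -\partial J / \partial W$ gives exactly this rule, but it is usually derived directly as taking a small step in the direction that decreases the cost fastest.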
Sir, it's W11[1] * a1[0], right? You've written it as W11[1] * a1[1] in the matrix multiplication. Can you verify whether I'm wrong?
Yes… there is a typo there.
Are B1 and B2 initialized randomly too?
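For context, a common convention (not necessarily what this video does) is to initialize the weights randomly to break symmetry but start the biases at zero:

```python
import numpy as np

n_x, n_h, n_y = 3, 4, 1                # illustrative layer sizes
W1 = np.random.randn(n_h, n_x) * 0.01  # small random values break symmetry
b1 = np.zeros((n_h, 1))                # biases can safely start at zero
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
```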
Hi, can you enable the caption option?
Hi.. somehow captions were not generated for this video. All my other videos do have captions. I will change the settings to bring captions to this video as well. Thanks for bringing this to my attention.
Can A* actually be Z*, e.g. A1 = Z1?
No, we need to apply a non-linear activation function. So A1 must be some_non_linear_function(Z1).
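A quick sketch of why the non-linearity is needed (illustrative shapes, not the video's code): if every activation were the identity, i.e. A = Z, the stacked layers would collapse into one linear map:

```python
import numpy as np

n_x, n_h, n_y, m = 3, 4, 2, 5
X = np.random.randn(n_x, m)
W1, b1 = np.random.randn(n_h, n_x), np.random.randn(n_h, 1)
W2, b2 = np.random.randn(n_y, n_h), np.random.randn(n_y, 1)

# Two "linear-activation" layers stacked...
A2 = W2 @ (W1 @ X + b1) + b2

# ...equal one single linear layer, so the extra depth adds nothing.
W_eq, b_eq = W2 @ W1, W2 @ b1 + b2
assert np.allclose(A2, W_eq @ X + b_eq)
```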
Hi, how do you calculate the cost?
You will get all the information in the upcoming videos that I have already uploaded in this series.
If you still have questions, you can email me at: codeboosterjp@gmail.com
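For readers who want a preview, a minimal sketch of one common choice, the binary cross-entropy cost; the exact cost used later in this series may differ:

```python
import numpy as np

def binary_cross_entropy(A2, Y):
    """Average binary cross-entropy cost over m examples.
    A2: predicted probabilities, shape (1, m); Y: 0/1 labels."""
    m = Y.shape[1]
    eps = 1e-12  # keeps log() away from log(0)
    return -np.sum(Y * np.log(A2 + eps)
                   + (1 - Y) * np.log(1 - A2 + eps)) / m

A2 = np.array([[0.9, 0.2, 0.8]])
Y = np.array([[1, 0, 1]])
print(binary_cross_entropy(A2, Y))  # small cost -> good predictions
```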
What is this B1?
Please share the code for the backpropagation algorithm.
Why no subtitles?
Wait, you haven't explained backpropagation at all.
Extremely confusing tutorial, and there's a mistake. This should be a3[0], not a3[1].
let bro cook
5:04
Lord Jay Patel
You don't understand it yourself.
Hey, may I know the reason you felt that way?