Thank you. There is NO substitute for someone being able to WRITE on a board, and explain EVERYTHING that they do.
Love this guy. His energy. His explanation. Brilliant teacher
I have never learned programming formally and I started by doing your coding challenges myself. Now I've started on this journey of neural networks because of you. Thank you so much.
I absolutely love this! Step one to NNs is always intuitive and easy, but then it gets complicated. Thanks for taking the time to explain so well and in so much depth!
My friends and I would like to thank you for all your videos. You helped us so much with our Master of Science in this domain. You really helped us to understand the mathematical process and logic behind it. Thanks a lot!
You're amazing
random engineering students
I'm so glad I found these videos. I have been struggling with Neural Networks for so long and it's difficult to keep up with other videos. You sir, are brilliant
It's a fact that hard algorithms become easy once they're taught by you. I really appreciate your efforts to make us understand beautiful concepts in no time.
You have been able to explain the sigmoid function so fast but also so clear. Thank you so much
So many videos cover the code behind the neural network, but it’s so important to understand the fundamentals first before you can code one. There are not many videos that do a good job of explaining the simple math behind neural networks; I have looked. I’m glad I found this one.
It might be worth noting that a perceptron can act as a NAND gate, and the set {NAND} is functionally complete. That means that you can build up any logical computation out of solely NAND gates, therefore, a perceptron can be used to solve any logical expression.
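To make that concrete, here is a minimal sketch in Python of a perceptron wired as NAND (the weights and bias are hand-picked; any values satisfying the truth table would work):

```python
# A single perceptron computing NAND with a step activation.
# These particular weights are just one valid choice.
def perceptron_nand(x1, x2):
    w1, w2, bias = -2, -2, 3
    weighted_sum = w1 * x1 + w2 * x2 + bias
    return 1 if weighted_sum > 0 else 0  # step activation

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron_nand(a, b))
# Outputs 1 for every pair except (1, 1), which is exactly NAND.
```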
Two suggestions: first, make a neural network that learns how to play Flappy Bird, since you already have Flappy Bird. Second, make a neural network that teaches a vehicle to drive, maybe for the same reason.
Great ideas!
so this is where it comes from..
Tesla: Delete dis now.
Yes I'd love to see you code a neural net controlled car
@PSNDMII Tesla of Flatlands: Delete this now
I have no idea how long this specific outro has been a thing - but I love it.
Your teaching style really works well for me. I have been watching all your videos in this series. This particular video seemed a little repetitive at times and stretched to some extent, but I understand your intent was to let all the viewers get a thorough understanding of the flow of the algorithm. After watching this, there should be no one who can claim that they have not understood the algorithm! Keep it up!
Great job!!!
You're explaining such complicated stuff from the basics in such a simple, modest, and relaxed way.
Keep on this way man !!!
I can't tell you how much I enjoy these videos and how much I'm learning, thanks a lot!!!
Thank you for giving us a wonderful intuitive introduction to the world of Neural Networks!
Thanks for doing this video, I'm currently reading "Neural Networks and Deep Learning" by Michael Nielsen and you helped me clarify some math stuff I was struggling with! Greetings from Argentina!
so glad to hear!
Great video for a comprehensive revision before the exam. Thank you!
you are freaking awesome!!!!! I just love how you teach!
Amazing explanation. I am new to the field of data science and these videos about NN have been really helpful. Thank you sooo much
Hi, I am from Sri Lanka. You are a good explainer. I am your big fan from today.
The way you saved my time, hats off.
Amazing video.
Incredibly helpful and great teaching style! Thank you
I just wanted to say that I love your videos, thanks for taking the time to put all of that together!
This was a great video. I just learned about the feedforward and the 'why' was missing. You really cleared things up.
I hope you do Python, but I still love watching your tutorials.
Big fan😊
You have the ability to teach the masses basic to intermediate coding to get us started. Is there, or could you make, a series strictly from beginner to employable? You would be the GOAT.
Kindly correct me if I am wrong.
At 16:14, it is mentioned that:
H[i] = W[i][j] . I[i]
Shouldn't it be as follows:
H[i] = W[i][j] . I[j]
i.e., the second term on RHS should be I subscript j instead of I subscript i.
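For reference, the corrected version is just the standard matrix-vector product written with an explicit sum over the inputs. A tiny NumPy sketch (the 3-input, 2-hidden-node shapes are picked to match the example under discussion):

```python
import numpy as np

# H_i = sum over j of W[i][j] * I[j]
# i indexes hidden nodes, j indexes inputs, so the input term is I[j].
W = np.random.uniform(-1, 1, size=(2, 3))  # 2 hidden nodes x 3 inputs
I = np.array([0.5, 0.3, 0.9])              # 3 input values

H = W @ I  # shape (2,): one weighted sum per hidden node
# The same thing with explicit loops:
H_loop = [sum(W[i][j] * I[j] for j in range(3)) for i in range(2)]
assert np.allclose(H, H_loop)
```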
The link for the next part leads to the previous video ^^'
This needs to be more popular than it is. Now I know how effective clickbait is! Phew
The funniest and the best explanation ever. Thank you!
Amazing video. Helps a lot with my data structures course project, which has a deep learning theme.
The dot product was the correct notation... (x, y) . (x1, y1) = (x*x1) + (y*y1), and you get a scalar which you pass through some activation function (ReLU/sigmoid). Great video, thanks.
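A quick sketch of that in Python, assuming the standard sigmoid 1 / (1 + e^-z):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

x, y = 0.5, -1.0    # input vector
x1, y1 = 0.8, 0.2   # weight vector

scalar = x * x1 + y * y1      # dot product: a single number
activated = sigmoid(scalar)   # squashed into the range (0, 1)
```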
Why did the way that 'weights' are named change from Video 10.5 to this video 10.12?
In the 10.5 video the numbers read from left to right, so that e.g. w12 means the weight from input1 to hidden2.
However in this 10.12 video it's the opposite, read from right to left. So now w12 means the weight that's going from input2 to hidden1.
Is this an error? Why did it change? Can anybody explain?
27:29
"Acting!"
What was this?????
Your explanation is amazing. Thank you Sir.
Thanks Dan, you might have saved me from dropping an AI Deeplearning program after Day 1 videos.
@ 15:20... it should be j rows for the I (input) matrix.
There were typos in your functions,
1) around 16:45, the subscript of X is wrong, the formula should be H_i = sigmoid(W_ij * X_j + B_i)
2) around 26:45, we only have one bias between the hidden layer and the output layer, so in the rightmost matrix, both should be b1.
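Putting the corrected formula together, here is a minimal NumPy sketch of one feedforward step, H = sigmoid(W·X + B), with arbitrary layer sizes:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n_inputs, n_hidden = 3, 2
X = np.array([0.1, 0.7, 0.4])                        # inputs, shape (3,)
W = np.random.uniform(-1, 1, (n_hidden, n_inputs))   # weights, shape (2, 3)
B = np.random.uniform(-1, 1, n_hidden)               # one bias per hidden node

# H_i = sigmoid( sum_j W_ij * X_j + B_i )
H = sigmoid(W @ X + B)  # shape (2,)
```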
You really enjoy teaching!!!
Looking forward to the next video. However, I like how you crack on with new topics. I don't mind different notations as long as they're settled and make sense :)
Some may find your videos, notations or explanations difficult to follow. I find they really click with me! Must be my (or our??) brains. Cheers!
Thank you!
18:15: I think it should be W[i][j] . I[j] + B[i], since i = {1,2} and j = {1,2,3}
Yeah, Neural Networks!!! I'm trying to build a NN but I don't know how to. I've been waiting for these for about 3 months. Thanks Dan!
Perfect, I will from now on switch into standby mode and wait for the next live stream :o
Complex things ain't boring at all, only at Coding Train! Choo choooo
You make me laugh while learning, which is interesting. Tnxxxxxx
The input layer is a layer but it's not counted as a layer when calculating the depth of a neural network. The depth of a neural network is calculated as -- number of hidden layers + 1 (for the output layer). So, the above one is a two-layer perceptron. Correct me if I'm wrong :)
Just took a programming midterm, I wonder what my teacher would think about neural networks after just starting off java with us.
You have much to learn young padawan.
Definitely!
2 years later, what's up?
Dude, your videos are amazing
You're an actual genius
How do we get a weight matrix in the first place? We know how to change the weights after backpropagation, but what about the weights during the feed-forward process? We know that the matrix is random, but my question is: are there any measures of that randomness of the weights?
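On "measures of that randomness": the starting weights are usually drawn from a small zero-centered distribution, and a common refinement is to scale the spread by the layer sizes (Xavier/Glorot initialization) so the activations don't saturate early. A sketch of both:

```python
import numpy as np

n_in, n_out = 3, 2

# Naive initialization: uniform in [-1, 1].
W_naive = np.random.uniform(-1, 1, (n_out, n_in))

# Xavier/Glorot initialization: the spread shrinks as the layers grow,
# which keeps sigmoid/tanh units out of their flat regions at the start.
limit = np.sqrt(6 / (n_in + n_out))
W_xavier = np.random.uniform(-limit, limit, (n_out, n_in))
```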
Thank you so much for your help. I am French and I don't understand everything in 3Blue1Brown's videos.
I can't wait for backpropagation
Please try backpropagation on two-hidden-layer networks and find the value of z for the 2nd-layer weights.
You need to come to Africa, my treat; you have taught me a lot.
Hello, do you know anything about the robust regression model called MM-estimation? I need to make a hybrid model that combines MM-estimation with an Artificial Neural Network.
I did not understand the use of X3 to solve XOR, but I'm not a really good English listener. Thanks for the clarification of what a hidden layer is. I would like to know how to use more than two layers and why you would use them.
Dan, please do some videos on implementing hybrid machine learning models: how to implement collaborative filtering, rule-based classification, and association rules.
Thank you sir, you are amazing (warm regards from Iraq to you).
Excellent series, keep up the good work!
Thank you Dan, you're the best.
Hey, where's my comment thread on whether it's a two-layer or three-layer?
Is x3 in this XOR example not already the bias for Input->Hidden?
Can you please explain how to group datasets by tissue subtype using a feedforward neural network? I am doing a project on drug prediction.
I should first thank you for making us understand why a single neuron will not work, but to be very honest, I did not get how two neurons will work. These two neurons will each have an output from the activation function, but I'm not able to imagine how this actually helps narrow down our problem. Is there a visual way of understanding what is happening after the hidden layers pass input to the next layers?
This is confusing me: in TensorFlow, the weight matrix is laid out with rows = input and columns = hidden nodes, while it's the opposite here. What am I getting wrong?
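Nothing is wrong; the two layouts are transposes of each other, and they give identical numbers once you flip the multiplication order to match. A quick NumPy check:

```python
import numpy as np

x = np.array([0.1, 0.7, 0.4])               # 3 inputs

# Video-style layout: W is (hidden, input), compute W @ x.
W_video = np.random.uniform(-1, 1, (2, 3))
h_video = W_video @ x                       # shape (2,)

# TensorFlow-style layout: W is (input, hidden), compute x @ W.
W_tf = W_video.T                            # same weights, transposed
h_tf = x @ W_tf                             # shape (2,)

print(np.allclose(h_video, h_tf))           # True: same result either way
```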
I loved your explanation! Thank you so much! 🤗
Very clear, great video!
I am glad to hear this b/c I felt so unsure about this video!
I guess we needed more about the values for the bias variables
How would that algorithm work if I had 2-dimensional layers?
Perfect!
A good addition would be to include multiclass classification in this lecture.
Why didn't you make videos about NNs when I was studying them back in 2013? Such a delightful way to understand something so complicated!
what's your major? Right now I am doing my basics at a community college. Then I am going to transfer to a 4 yr university majoring in software engineering. I have to learn Java and C++ lol. I know it is going to be hard but it is possible.
Wouldn't it be easier if we multiplied V * W + B where V = [v1, v2, v3] instead of [[v1], [v2], [v3]] (so, transposed)? It makes more sense to me that values go through weights instead of the other way :) Great series tho!
If an ANN is a universal function approximator, then I could make one with one input node, one output node, and some number of hidden nodes, and train it to approximate a function like sin, cos, tan, square, square root, etc.? I think I might just try that.
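That experiment is very doable. Below is a minimal sketch, assuming one hidden layer of tanh units trained with plain gradient descent on mean squared error (the gradient lines are backpropagation, which comes later in the series, so treat them as a preview):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # 1 input node
t = np.sin(x)                                       # target function

n_hidden = 20
W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass: input -> tanh hidden layer -> linear output.
    h = np.tanh(x @ W1 + b1)
    y = h @ W2 + b2
    # Backward pass: gradients of mean squared error.
    dy = 2 * (y - t) / len(x)
    dW2 = h.T @ dy;        db2 = dy.sum(axis=0)
    dh = dy @ W2.T
    dz = dh * (1 - h**2)   # derivative of tanh
    dW1 = x.T @ dz;        db1 = dz.sum(axis=0)
    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.max(np.abs(y - t)))  # worst-case error; should shrink well below 1
```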
let me know how it goes!
Wow, interesting and fun lecture, I love it!
Hi, just wondering if you will do something on the Hopfield network? It is a great updating network! I hope so... *fingers crossed*
If the output of the sigmoid (activation function) is a value between 0 and 1 (like 0.79), is it necessary to round it?
I mean:
sig >= 0.5 ---> 1
sig < 0.5 ---> 0
Not sure, but actually, to get a strict 0 or 1 output, I think sigmoid is not the right activation function; rather it should be a step function (Heaviside function). Maybe someone else can correct me if I'm wrong.
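Both views fit together: the classic perceptron used a hard step (Heaviside) for a strict 0/1 output, while sigmoid gives a smooth value in (0, 1) that you can threshold at 0.5; the smoothness is what makes gradient-based training possible. A small sketch:

```python
import math

def step(z):                  # Heaviside: hard 0 or 1
    return 1 if z >= 0 else 0

def sigmoid(z):               # smooth value in (0, 1)
    return 1 / (1 + math.exp(-z))

z = 1.35
print(step(z))                          # 1
print(round(sigmoid(z), 2))             # ~0.79
print(1 if sigmoid(z) >= 0.5 else 0)    # thresholded decision: 1
```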
If the bias weight is 0, then doesn't the whole purpose of the bias get removed?
It went right into my head :)
Why do we have to multiply the input value by the weight?
Please do a series on unsupervised learning using autoencoders ....
FYI, couldn't find the link for the book.
OK. Sorry. Found the Amazon link. Should listen to what you say in the video. :-)
Thanks, this is really neat :)
Great video!
Interesting video !
Fantastic! Great, thanks. I like your teaching, being a technical school teacher myself. Question: Do I have to assume or understand that a 'weight' could also be a 'gain', like in an electronic circuit (i.e., amplification of a signal)?
Yes, I think that's a great way of thinking about it!
Can someone explain why XOR can't be achieved by one perceptron?
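Short answer: a single perceptron draws one straight line through the input plane, and no line puts (0,0) and (1,1) on one side with (0,1) and (1,0) on the other. Writing the four constraints out makes the contradiction explicit (a perceptron outputs 1 iff w1*x1 + w2*x2 + b > 0):

```latex
\begin{aligned}
(0,0)\mapsto 0 &:\quad b \le 0\\
(0,1)\mapsto 1 &:\quad w_2 + b > 0\\
(1,0)\mapsto 1 &:\quad w_1 + b > 0\\
(1,1)\mapsto 0 &:\quad w_1 + w_2 + b \le 0
\end{aligned}
```

Adding the middle two inequalities gives w1 + w2 + 2b > 0, so w1 + w2 + b > -b >= 0, which contradicts the last line. Hence no single perceptron computes XOR; a hidden layer (or an extra input like the x3 mentioned above) is needed.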
I fucking love your videos man. Thank you for everything
Are you going to cover LSTM RNN also?
I hope so.
Would have been nice to have the XOR problem actually explained.
thanks for the clarifications
Trying to learn as much programming as I can before college to get ahead of everyone.
If you get this lesson, you're so much ahead of professionals :D
7:22 lol
It was so funny that I replayed it several times :D
me too, LOL
You look like Gilfoyle from Silicon Valley
Don't insult my man like that, he is a humble guy
Can you do... simple exponential smoothing?
@Maanlamp umm, I mean a forecasting method... single, double, or triple exponential.
Is it the same as what you explained?
Thanks in advance.
great work
How do I choose the weights, please?
5 minutes is enough. But at 27:40, are you alone?
thank you