In the next video we’re going to be making a blockchain in JavaScript, so subscribe if you’re interested in that stuff!
great video, made everything so easy
Dow stupid schools blocked pip and zip archives so I can't install numpy
Which compiler did you use?
in which software r u coding??
Polycode, can the neurons and inputs be placed together, like neurons with a lot of built-in data?
Also, I need a very powerful neural network for several different purposes (speech, Face ID and solving math problems). Do you have something open source that you made and can share with me?
"stay with me, it's gonna be ok"... dude, that's such a lovely sentiment. You were born to teach I think, with that ability to keep pupils onboard. Very good video my man, thank you so much..
My friend, your explanation in 15 minutes gave more clarity to me than hours of crash course tutorials online. So simple and well explained. Awesome stuff my man!
After watching hyper-advanced tensorflow/keras stock market prediction tutorials for a while, being completely lost, I stumbled on this.
I finally, after weeks of trying to learn NN and decades of practical programming experience, understand it.
The iterative backpedaling was what confused me in all of those other videos, but taken down to its most simple form, like in this video, I can now see that it's merely looking at what it got versus what it was trying to get, making adjustments to the appropriate synapses based on that, and then trying again.
It's not the maths that confused me, it's how the machine actually learned. And that was perfectly demonstrated in this video. Thank you!
Do you know where I can find these tutorials? It would be very helpful for me, thanks!
Kindly feel free to share with us who the teacher was that took you through the previous tutorials. This teacher is doing well, though. Credits 💪
B
@Isaiah _ Neural Network
I agree too. So many videos complicate and dance around simple mechanics. Knowing the flow of the engine and the simple concept of what is happening, the other videos might make more sense now that I can put it into context.
What the?... this is it, finally I found a good tutorial
same lol, I can finally actually flippin understand, thanks so much
+1 sub
i can english.
I agree
ye someone finally explains what it is XD
same!
“Stay with me, it’s gonna be okay” that makes me feel like I’m actually learning something and not just being told something
(I know I’m late but) Literally came to the comment section about this 😂
"stay with me it's gonna be okay"
TypeError: '
@@wirly- Loll
@@wirly- hahaha
This tutorial is a perfect blend of talking/programming and slides. It's also quick and to the point 8)
I watched a lot of videos about machine learning because I wanted to understand how it works. None of those videos explained as well as yours how a neuron and the adjustment actually work. Good work, now I finally understand it.
Bro, it was much easier than I thought. Thx for explaining.
What a fantastic way of explaining it. Whilst this is obviously not immediately useful, it's a sort of toy approach that gives you a building block to understand the greater scope.
Line 16: synaptic_weights = 2 * np.random.random((3,1)) - 1
This line makes a 3x1 array, i.e. a matrix of size 3x1. I did not understand it before I tried the line separately.
That makes the random-initialization concept easy to grasp, but as I learned in Soft Computing in my BTech, you can also initialize the weights directly to 1, and they will then get adjusted during training.
You can replace the line with this: synaptic_weights = np.array([[1,1,1]]).T
THANKS TO YOU for making this short and easy tutorial!
Hey, can you tell me why we are multiplying by 2 and subtracting 1?
@@Retriiiii where??
@@nocopyrightgameplaystockvi231
2 * np.random.random((3,1)) - 1
(the 2 * at the front and the - 1 at the end)
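For anyone still puzzled by that line, here is a minimal sketch (my own, not code from the video) of what the expression does, plus the all-ones alternative from the comment above; the seed value is arbitrary:

import numpy as np

np.random.seed(1)   # arbitrary seed, just so repeated runs print the same numbers

# np.random.random((3, 1)) draws a 3x1 matrix of floats in [0, 1);
# multiplying by 2 stretches that to [0, 2) and subtracting 1 shifts it to [-1, 1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1
print(synaptic_weights)                      # three values between -1 and 1

# the alternative suggested above: start every weight at 1 instead
synaptic_weights = np.array([[1, 1, 1]]).T   # .T turns the 1x3 row into a 3x1 column
print(synaptic_weights)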
I joined my university 2 months late and had absolutely no idea how to catch up on the neural network project topic I had missed, and then I saw your video!!! Thanks a lot dude!!! For saving my semester HAHAHA
meaaaww hahaha nice, share it with any of your buddies if you think they need it ;-)
@@JonasBostoen Oh yes, already did that... right now you have the blessings of many helpless students LOL
Finally, a clear, straightforward tutorial to code along. GREAT JOB!
Most useful video on the internet for a total beginner, for anyone new to AI. Thanks.
Just a note on sigmoid_derivative, for myself as much as anyone else. Since you're inputting the output of sigmoid to sigmoid_derivative, he's using the fact that the sigmoid satisfies the differential equation
y'(x) = y * (1 - y)
so we can compute the derivative sigmoid'(x) by plugging sigmoid(x) into [y --> y(1-y)]. That's very clever!
But you should run the outputs through the sigmoid derivative, right? And the outputs are sigmoided by default, so shouldn't you use the sigmoid twice?
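A quick numerical check of that point (my own sketch, not from the video): because the function is handed the sigmoid's output, x(1 - x) gives the exact slope, so there is no need to apply the sigmoid a second time. The value z below is arbitrary:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_derivative(s):
    # expects s to ALREADY be a sigmoid output, exactly as in the video's code
    return s * (1 - s)

z = 0.7                                        # arbitrary pre-activation value
exact = np.exp(-z) / (1 + np.exp(-z)) ** 2     # derivative straight from the formula
via_trick = sigmoid_derivative(sigmoid(z))     # derivative via s * (1 - s)
print(exact, via_trick)                        # both print the same number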
Amazing video, too few sources do the absolute basics. However, can you please crank your volume up!
I'm at the 10 minute mark and I just wanted to say that your explanations are clicking left and right with me, thank you!!!!
The best one to give you the right explanation of creating a neural network from scratch.
Wow... the perfect tutorial. I have been searching the internet for a tutorial on how to make neural networks from scratch.
Now I've got it.. this is so cool...
Very detailed explanation...
This is what I'm looking for, on how to train your datasets by adjusting weights. Thank you so much!
For the output after training, you can use this to round off the decimals: print(np.round(outputs, 1))
Coding starts at 2:30
Polycode ping your comment so others will see it!
@@ChillGuyUA-cam maybe his firewall blocks icmp packets
@@du42bz I read that as "pimp packets"
At last... the video that doesn't just explain stuff, but actually tells you what to do too!
Thank you very much. I constantly see these videos about the theory of Machine Learning and AI, but I have never found an in-depth, start-from-scratch tutorial with no libraries, all while explaining everything. Thank you!
Thanks for the video.
I tried to follow this, but I see the solution can go another way in binary logic:
the first column multiplied by the sum (OR) of the two other columns,
so it's not only the first column that decides the output, the others do too, as below.
If we take the table at 0:20:
Example 1: 0x(0+1)=0
Example 2: 1x(1+1)=1
Example 3: 1x(0+1)=1
Example 4: 0x(1+1)=0
New situation: 1x(0+0)=0
This is by far the best explanation. I guess by keeping the complexity level of chosen example pretty low, you landed the message perfectly, thanks !!
2 minutes in and I already have a better understanding than 2 semesters worth of lectures
So far the best, simplest and most practical tutorial I've found. You cleared all my doubts, and a little background in Python helped me a lot.
Superb! Using the seeded weights so that you and the viewer get the same results was a brilliant touch. Helps the viewer know if he miscoded or not. Thanks.
I have been looking for a toy example of Neural Networks, thanks to your video I get to see one. Your video is very concise. Thank you. Also, thank you for sharing your Python code.
Thank you for a very useful insight into what is behind the neural network. At 10:00: (the derivative of a sigmoid function) = (sigmoid function)*(1 - sigmoid function), and not x(1-x)
Best tutorial on neural networks I have seen till now.... thanks buddy 😘
Thanks a lot. This is much more comprehensible than all I have watched and read
I watched a lot of ANN videos on YouTube, and all of them were missing something that I wasn't getting.
But thanks to you I got what I needed, especially the explanation of how it works. Thank you again
Akmal Eache thanks man
finally a properly structured tutorial
Nice profile pic 😂
thx for the tutorial, gave the neural network my own training data and it worked great!
At 0:40:
The output depends on both the first and the last input, not only on the first. If I label the inputs a, b, c from left to right, then according to the truth function over the 4 given states, the output is
O = abc + ab'c
= ac(b + b')
= ac.
So the NN output for the input 100 should be 0.
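A tiny check of this point (my own sketch; the rule names are mine): more than one boolean rule fits the four training rows, and the rules disagree on the new input (1, 0, 0), which is why different commenters arrive at different "correct" answers:

# the four training rows from the video and their outputs
rows    = [(0, 0, 1), (1, 1, 1), (1, 0, 1), (0, 1, 1)]
targets = [0, 1, 1, 0]

rule_first = lambda a, b, c: a          # "output follows the first column"
rule_ac    = lambda a, b, c: a and c    # the a*c rule derived above

for name, rule in [("first column", rule_first), ("a AND c", rule_ac)]:
    fits = all(rule(*r) == t for r, t in zip(rows, targets))
    print(name, "| fits training data:", fits, "| prediction for (1,0,0):", rule(1, 0, 0))
# both rules fit all four rows, but they predict 1 and 0 respectively for (1, 0, 0)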
It helps to have someone who actually knows how to break a "problem" down to its bare essentials. Excellent work.
Excellent picture
Hey dude just saw this video from your post on /r/programming - This video is awesome! You're great at explaining everything. Neural nets can sometimes be confusing but this makes a lot of sense to me. Thanks so much!!
Man, this was so to the point! Thanks for your efforts. Best NN basics tutorial I've found so far! Very very useful!
waiting for the next video, this type of explanation really helps
I've uploaded it!
This is such a good tutorial!!! I finally understand how these things are actually coded!
Lots of people can code, only a few can teach.. well done
Just a note: sigmoid_derivative is based on the exact analytical formula for the sigmoid derivative.
thanks so much for this, I was really confused during that bit!
@@sonic597s I don't get it. He still uses x(1-x), which has nothing to do with the sigmoid; it's just an approximation to the shape of the curve (the signs are opposite)
@@pluronic123 A derivative finds the slope of the curve at a given point. The sigmoid derivative being the formula x(1-x) (where x is the sigmoid fn.) means that if you plug the sigmoid of some value (z) in as x, you get the slope of the sigmoid fn at that value (z)
@@sonic597s thanks precious internet dude
Thanks for great video!
Possible code to find output for [1,0,0] :
p_in=np.array([1,0,0])
p_out=sigmoid(np.dot(p_in, synaptic_weights))
print("Predicted Output After Training:")
print(np.round(p_out))
=>
Predicted Output After Training:
[1.]
Before I slightly understood how neural networks work, now I understand how they work slightly better than before.
Dude this video was really helpful! Thank you for explaining the basics of neural networks! :D
1:39
"so we need a little meth"
I think we all do
Adderall is good for that
It's math LoL 😛
Nice work! Finally found someone that can teach the way I can understand it..
I subscribed and look forward to watching all your videos!
NICE!!!!! Finally, I can understand what a NN and backpropagation are. Simple and easy to understand. Thanks a lot to Polycode :)
This was such a great tutorial. Very clear, concise and well paced.
Wow, I’ve been looking for a tutorial just like this for a long time! Subscribed! Please keep making videos!!
15 minute video... takes me 2 hours to get through XD
This video has taught me more than anything about ANN.
Simple, Clear and straight to the point. Great Job!!!
You did a great job, you should make more videos. Maybe explaining how to make a more complex neural network.
Output = array[1[1]].value
Lol just kidding. This was a great video and I understood a ton
This is the tutorial I was actually searching for to understand neural networks... Thanks a lot...
Completely new to this and you made it very easy to understand. Thank you and good job!
Great tutorial, better than the usual,"Just use this library...."
So I tweaked the training outputs to 1,1,1,0 with iterations in a range of 100,000, and the computer gave me a perfect answer of 1 for the third output. The other outputs were close to the true answers, but I didn't think the computer could give a 100% true answer. I guess I'm confused that it didn't take that many training loops to give that answer.
Btw, great video, finally got me to get the computer out and start!
What an excellent explanation of a complex subject! Please keep up the videos.
Wonderful video, this will definitely turn my project around. Thank you so much!!! :)
Thanks, this was so helpful. It really cleared up a lot of my questions about the topics other videos said "let's not talk about that yet"... Thanks again, these videos are super helpful, keep up the amazing work
Great tutorial, but I might have used a different approximation for d-sigmoid. I'm not sure where you got x(1-x) from as an approximation: it does not share a derivative with d-sigmoid and the vertex is off in space. I'm not sure if it is a standard to use and I'm just misunderstanding (I'm watching this tutorial to learn, after all), but I did a quick Taylor polynomial approximation and got the function:
d-sigmoid ~= (2 - x^2) / 8 (this won't work very well for things not centered at x = 0)
This is about the same in terms of typing effort and computer processing, but a little more accurate. It is also based around x = 0, so it won't be biased towards one outcome (unless you built a weight into your function, in which case it makes a lot of sense).
You can continue on to the 4th derivative in the series and add a third term which doesn't factor as nice but is extremely accurate (+/- 0.001) on the domain -1
0:38 the rule could also be that the first and third inputs have to be 1, and not just the 1st.
Precisely, I thought the same too
FINALLY!!.... I have been looking for such a tutorial that teaches from scratch... Very good of you to do so... Keep it up bro.. Make more videos like this... BTW I am new to your channel. Just subscribed
Excellent Explanation making things crisp and clear
THANKS FOR VERY SIMPLE WAY TO EXPLAIN... FINALLY UNDERSTOOD.
The best neural network hands on
You are my hero! My prof is so bad at explaining exactly the same things over, I guess, 4 or 5 lessons of 3 hours each. And you just need a few minutes... haha. I subscribed immediately. I need more of it!
What a great video! Keep up the good work, thanks for sharing your knowledge
Incredible! I think this is the first video that has helped me understand the formulas behind a neural network! However, I was wondering how you implement the calculation of biases in the actual code and in the backpropagation steps and formula?
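For anyone else wondering about biases: the video doesn't seem to add one, but a common trick (just a sketch of mine, not the author's code; the variable names and iteration count are my own) is to treat the bias as one extra weight whose input is always 1, so the same update rule trains it:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

training_inputs  = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
training_outputs = np.array([[0, 1, 1, 0]]).T

# append a constant column of ones; its weight acts as the bias
inputs_with_bias = np.hstack([training_inputs, np.ones((4, 1))])

np.random.seed(1)
weights = 2 * np.random.random((4, 1)) - 1      # 3 input weights + 1 bias weight

for _ in range(20000):
    outputs = sigmoid(np.dot(inputs_with_bias, weights))
    error = training_outputs - outputs
    adjustments = error * outputs * (1 - outputs)   # same s*(1-s) trick discussed above
    weights += np.dot(inputs_with_bias.T, adjustments)

print(outputs)       # close to [0, 1, 1, 0]; weights[3] is the learned bias
# (in this particular dataset the third input column is already all ones,
#  so it was effectively acting as a bias all along)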
I changed the sigmoid derivative function to this and got better results in fewer tries, and this is the actual derivative of the sigmoid function:
def sigmoid_derivative(x):
    return np.exp(-x) / pow(1 + np.exp(-x), 2)
This is indeed a better derivative, good job! For the purposes of simplicity though I have kept the less complicated function since it's almost the same shape. Yours is better though.
@@JonasBostoen thanks but I didn't understand the use of the 2 * random.random(3,1) in the beginning of the class initialisation
Your video is a life saver, thanks! Hope you make more such videos!
this is one of the best yet simplest explanations. keep it up
This video deserves an award
This is the thing that finally helped me understand! Never stop making the great vids!
wow, couldn't be explained better, keep up the good job.
there are not many sources for newbie machine learners, especially with no libraries!!
I need more, that's awesome
Nice presentation. Made it feel very simple
Hi, thank you, this is very easy to grasp for a newbie like me. Simple and clear. Keep going 👍
First time I understood backpropagation, thanks to your video.
For this simple problem backpropagation is not needed. The gradient formula can be computed analytically and would reduce the training iterations a lot. (I achieved high confidence with 500 iterations only)
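One possible reading of this (a hedged sketch of my own, not necessarily what this commenter did): for a single sigmoid unit with cross-entropy loss, the gradient works out to inputs.T @ (prediction - target), with no extra sigmoid-derivative factor, and gradient descent on that needs far fewer iterations. The learning rate below is an arbitrary choice:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
y = np.array([[0, 1, 1, 0]], dtype=float).T

np.random.seed(1)
w = 2 * np.random.random((3, 1)) - 1

for _ in range(500):
    pred = sigmoid(X @ w)
    w -= 0.5 * X.T @ (pred - y)    # 0.5 is an arbitrary learning rate

print(pred)    # already close to [0, 1, 1, 0] after only 500 iterations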
That is the best empirical lesson on basic NNs
You are so wonderful, I understand it quite well thanks to your basic and easy-to-learn method, thanks
In line 16, why have you multiplied the random weights by 2 and then subtracted 1 ? Great video .. very helpful .. Thank you very much.
np.random.random returns floating point values between 0 and 1, but since we need values between -1 and 1, this is the way to do it.
@@JonasBostoen thank you for this clarification. I was lost at this line but luckily stumbled onto this comment. Thank you very much! Cheers!
@@JonasBostoen I'm very late to the party, but since we need a random number between -1 and 1, wouldn't it be better to add two random numbers, then substract 1, or does it matter?
The derivative of the sigmoid is: σ'(x) = σ(x)(1 - σ(x)). It took me a while to understand what you meant at 9:33, maybe you should consider adding a comment.
The derivative of the sigmoid function is: φ*(1 - φ). x*(1-x) is wrong
Lol, spent like 10 min trying to get his result and then eventually googled it to find out I had the correct result the whole time. At least the correct version was used in the code.
yet somehow it gives the incorrect result when using the correct derivative. Something else is missing here.
Thanks so much! After days of looking, found a great tutorial and can expand my knowledge!!!
bruh
Thanks for thinking about the equality of random weight for us.
There is something I'm not understanding: when it's time to change the weights, you're supposed to multiply the input by the adjustment and add it to the weights, right? Doesn't that mean that if the input is 0, the weights won't change at all? I noticed this when I tried different inputs and outputs. Your example works fine, but when I tried {0,0,0},{0,1,0},{0,1,1},{0,0,1} as inputs and {0,0,0,0} for outputs it was a mess, and no matter how many tests I did it couldn't figure out the correct answer
it does, this is a mistake in the code and can be fixed if you add a learning rate variable to multiply by the adjustments, rather than using the training inputs.
@@havoc3135 instead of dotproducting the (transposed) training inputs with the adjustments, multiply the adjustments by some scalar, so you can scale your adjustments manually. hope this helps
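In case it helps, here is a minimal sketch (my reading of the suggestion above, not code from the video; the learning rate value is a hypothetical choice) of the same training loop with an explicit learning rate. Note that a learning rate alone cannot fix the all-zero row: with input (0,0,0) and no bias, the prediction is sigmoid(0) = 0.5 no matter what the weights are.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# the inputs and outputs the commenter above had trouble with
training_inputs  = np.array([[0, 0, 0], [0, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
training_outputs = np.array([[0, 0, 0, 0]]).T

np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1
learning_rate = 0.1               # hypothetical value, tune as needed

for _ in range(20000):
    outputs = sigmoid(np.dot(training_inputs, synaptic_weights))
    error = training_outputs - outputs
    adjustments = error * outputs * (1 - outputs)
    # scale the update by a learning rate instead of applying it at full strength
    synaptic_weights += learning_rate * np.dot(training_inputs.T, adjustments)

print(outputs)
# the first row is stuck at 0.5 (no bias), the other rows head towards 0;
# the weight for the first input column never changes either, since that column is all zeros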
the derivative of φ(x) is φ(x)*(1 - φ(x)) and not x(1-x), at 10:00
Amazing tutorial, keep up the good work
Finally a video I can understand! Thank you
If Φ(x) = 1 / (1 + e^(-x)), then Φ'(x) = e^(-x) / (1 + e^(-x))^2, not x(1 - x).
I'm curious about your Atom setup. Are the text overview on the side and the code suggestions hidden in Atom somewhere, or are they plugins?
huh
Thank you so much. This tutorial is direct, clear and instructive. One more subscriber.
This video is 100% gold, thank you !
This is kind of like logistic regression; if you go deep you may realize that it is lowering the KL divergence in each iteration.
However, you can only classify 2 types of classes in this example; you may also try softmax.
And you may save and run it on Google Colab now, no need to install Python yourself.
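For anyone curious about the softmax mentioned here, a minimal sketch (mine, not from the video): it turns a vector of class scores into probabilities over any number of classes, which is what you would reach for instead of a single sigmoid once there are more than two classes. The example scores are arbitrary:

import numpy as np

def softmax(scores):
    # subtract the max before exponentiating, for numerical stability
    exps = np.exp(scores - np.max(scores))
    return exps / exps.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # roughly [0.66, 0.24, 0.10], sums to 1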