This is exactly how I like to learn. Only a few people explain as clearly as you. My professor just shows us math stuff that seems scary. Most professors are like that (screw them). Thank you so much!
I hope some day they invent a neural network that is able to explain in a simple way what neural networks are. We're obviously not there yet.
Oh man, I know what you mean. I personally re-recorded this video at least 4 times in an effort to make it simpler and simpler. Still not where I want it to be yet. The simplest description I have doesn't explain the "how", but more of the "what", and it's this:
"Just like you teach a baby a dog and a cat, you show them a picture of a dog and say 'dog', then show them a picture of a cat and say 'cat', and do that over and over until they start to know the difference...that's machine learning - you show the computer some data and tell it what the outcome is, then show it different data and tell the outcome...over and over thousands of times until the computer tries all sorts of adjustments and can eventually sort out how the data affects the outcome."
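That show-and-tell loop can be sketched in code. This is a hypothetical toy example, not the video's method: a single weight learns that the outcome is always double the input, with a crude nudge standing in for real backpropagation.

```python
# Toy sketch of "show data, tell outcome, adjust": one weight learns
# that the outcome is always double the input. The nudge below is a
# stand-in for real backpropagation.
def train(examples, epochs=1000, lr=0.01):
    w = 0.0
    for _ in range(epochs):
        for x, target in examples:
            prediction = w * x
            error = target - prediction  # "tell it what the outcome is"
            w += lr * error * x          # small adjustment toward the answer
    return w

# Show it input/outcome pairs over and over, thousands of times in total.
w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 2))  # converges close to 2.0
```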
ua-cam.com/play/PLxt59R_fWVzT9bDxA76AHm3ig0Gg9S3So.html
Hey Alex. I recommend this article for something that is quick and easy to understand: youcodetoo.com/2019/07/16/what-is-machine-learning/
Here you go: ua-cam.com/video/r1U6fenGTrU/v-deo.html
Monads, too
This is a great tutorial for those who have some background in what a neural network is, have some knowledge of the technical terms used and are aware of the overall objective of a neural network.
For absolute beginners, this may not be the best place to start. Perhaps you could make a tutorial on what a neural network does, like how a marketing guy would pitch it to a non-technical client?
tysm!! I've been looking at other tutorials and started to feel discouraged because I didn't understand the machine learning lingo and math behind it. This simple explanation is a lifesaver!
Very math-free explanation and it is very helpful to pick up the basics. Nice job!
This is awesome. Finally a simple yet coherent explanation.
Best explanation I’ve seen. Thank you
I have read into this and watched several videos to help explain this really intricate topic. However, the only part I still really struggle with is, "Why do the equations work?"
Anyone can copy an equation, but to make something of your own, you have to understand it.
Very true
Thank you for the explanation! Finally something I can easily understand
This is what I really needed before digging in deeper! Thank you so much for such a nice explanation!!!
A question: do we give weights and biases to only the first hidden layer, or to all layers?
EDIT: After watching the next video in the series I understood that weights/biases are given to all nodes in all hidden layers initially. Best explanation ever on this topic.
THANK YOU! After years (on and off!) of trying to wrap my head around this algorithm, it finally is "intuitive" enough for me to move to the next level and actually try and code one!
Wow, what you explained totally nailed it! I've been so confused about what a neural network is, and all I knew was that it's something you put input into and it gives an output
Oh my God, this is the most helpful video in 2021
Finally the video I was searching for, you're too good.
This is a great video! Neural networks are easier than I thought; the most frustrating part is the maths, but you probably don't need it, as ML frameworks like Tensorflow have done the complex maths for you.
Yes, but we still need to understand the maths behind them. That's an advantage for understanding more about our NN
Probably the best tutorial on ML on the internet.
beautifully explained! I've never come across such a simple and understandable explanation of neural networks!
Omg, you actually made me giggle with your explanations. A little bit confusing, especially about back propagation, but still very useful. Thank you!
the best explanation of neural network, thanks!
Thank You sir. Really appreciate the effort. I hunted for a good explanation and finally ended up with this gr8 one. Couldn't be happier.
Ok. So after you apply the activation function to the sum, what do you do with it? The arrows on the right aren't explained.
These are my first days of learning about machine learning and AI myself. It's "a little" confusing, but hopefully I'll get better and things will get easier in time, especially since English is not my native language. I wish good luck to y'all trying to figure these things out, you are not alone.
your video was very useful to clarify all my information about NN, thanks dude
After the activation function, do you multiply it the same way as before?
Actually, all the nodes are individual perceptrons with inputs, weights, and biases. The activation of the output is just pushing buttons instead of feeding forward. The network learns by trial and error, mainly by shifting the weights toward a boolean yes or no, like +1 or -1. The bias is just a way to overcome adding 0 with 0. Backpropagation is just an attempt to speed up the result, since you could also randomize the weights over and over and try the sum again and again until you get lucky. Anyway, in a game it's the game rules themselves that supervise the network. Technically a single perceptron can handle a whole stream of data points, but since one node has very little memory, a network is needed to hold the memory of the game, as a neural net is just an array of variables at the core. It's the wiring of these connections that creates a pattern from A to B reflecting where you are in the game. I mean, there is a chronological order to the arrangement of data in it that reflects the game itself. All the numbers in the network become similar to each other and are fractal-like by default.
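The trial-and-error weight shifting described above is essentially the classic perceptron learning rule. Here's a minimal sketch, using a made-up dataset (logical AND, which a single perceptron can separate):

```python
# Perceptron learning rule sketch: nudge weights toward the right yes/no answer.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # +1, 0, or -1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err                 # the bias shifts the firing threshold
    return w, b

# Learn logical AND (linearly separable, so a single perceptron suffices).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
          for (x1, x2), t in data))  # True
```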
Would you have to have a "black box", or could you just have the outputs and inputs directly connected without a weight?
Is there a way to train the bot's programming without a weight?
Great overview and good vibes!
Are you adding some slight reverb to your vocal track? The acoustics sound great, like you're on stage, not in a bedroom in front of a computer lol.
Where do you apply the calculated change? It didn't affect the weights and biases at all in the diagram.
That's the new weight for the next iteration.
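As a rough sketch of that idea (the function name and learning rate here are assumptions, not the video's exact formula): the calculated change is scaled and applied, and the result is the weight the next forward pass uses.

```python
# Sketch of applying the calculated change: the adjusted weight is what the
# next forward pass uses. The name and learning rate are assumptions,
# not the video's exact formula.
def update(weight, change, learning_rate=0.1):
    return weight - learning_rate * change

w = 0.5
w = update(w, 2.0)  # change computed during backprop
print(w)  # 0.3, used on the next iteration
```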
And who tells if the outcome is correct or not and how?
Thank you, this is helpful.
During back propagation, are all the weights and biases adjusted at the same time, or is it one at a time, recalc, then the next one, etc.?
Ayeee, thank you for scrunching all this heavy content into a way that makes sense. Props to you!
*NO ANIME*
Great video! Thanks. I just spotted one mistake at 03:55 - you say "We've chosen three hidden layers." Do you mean "We've chosen three nodes in our hidden layer"?
Great video! Thanks for sharing!
Excellent video... but you came up short with the explanation of the backprop calcs...
No idea what delta is... no idea what pastChange is... and you don't have to change just the values and the weights, you have to change the biases too... so... I hope this is in the next video...
Really good so far
Ok, we multiplied the weights and added the bias... what do we do with that? Do we make it the next connection's weight, or what?
Great explanation thank you
This is actually really cool. It's a bit difficult at first, but once you get it you will like math 10x more
Why does every neuron have a different bias? Isn't the bias common for all neurons of a single layer? Or is this a different type on NN than the one explained by 3Blue1Brown?
The explanation gets the general idea across... but yeah, the black box remains just that... a black box.
Great intro thanks!
Ok, I’ll learn ML :D
That is amazing how interesting and simple your explanations are for such a complicated topic.
Thanks! I actually took a month or two to try to get the explanation down...Neural Networks are simple, but complex to explain.
yeah... very simple
Thanks for the equations by the way, huge help!
I'm still a beginner, but I don't think you need to know (at least for now) why it works, just like most people don't know why x = (-b ± √(b² - 4ac)) / (2a) works (the quadratic formula).
I think maybe you could give more detailed calculations and visualize it from each iterations so that viewers can understand how the calculation works
I do that in the next video with an actual neural network and actual console logs of each calculation. ua-cam.com/video/9ZsyQZouOQ8/v-deo.html
now this is great! this is all I need, thanks. I think you should change the title from "Neural Networks Explained Pt 2 - Machine Learning Tutorial for Beginners" to "Neural Networks Explained - Machine Learning Tutorial for Beginners part 2" so that it shows in my youtube recommendation section.
The bias value is common for each hidden layer; it's not like each neuron in the hidden layer has a different bias value... Please correct me if I'm wrong
At time 1:04 you said, "if you have seen my last 2 videos on Machine learning", can you supply their links here? I have not found them in your channel archive. Thanks
Were those numbers from the first calculation correct? I got different numbers than .3512 and .7891
Should the input weights be converted to a value between 0-1 first?
This is amazing
Maybe you should explain what a perceptron is, why it needs weights, and what bias is, then go deeper.
If you really want to master this material, program it using only numpy. Keras and other higher-level libraries black-box what's going on. I'm on day 3 and I just got to programming back propagation, but so far this has had the biggest effect on my understanding. I tried to follow many Keras tutorials and had no idea what was going on beyond how to implement it.
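For example, a bare-bones forward pass in numpy makes explicit what a framework hides. The layer sizes (2 inputs, 3 hidden, 2 outputs) and the random seed here are made up for illustration:

```python
import numpy as np

# Bare-bones forward pass with nothing but numpy, so no framework hides
# the weighted sums. Layer sizes and seed are made up for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)  # 2 inputs -> 3 hidden
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)  # 3 hidden -> 2 outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(W1 @ x + b1)      # weighted sum + bias, then squash
    return sigmoid(W2 @ hidden + b2)   # same again for the output layer

out = forward(np.array([0.5, -1.0]))
print(out.shape)  # (2,)
```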
Thanks ... it was very useful
🤯 OH... W.O.W!! THANK YOU Sooo MUCH!!
I knew it was simple. But you just made it soooo easy for me to understand!
I was already like🤪 , "what!!? Ok, I guess I have no other choice but to go over the code of a neural network myself because all explanations are just... Garbage!"
People try so often to sound smart when they explain things they forget that the smart teacher is the one who actually makes people understand.
I salute you!
This has definitely made me want to code one myself.
Plz more video in machine learning with JavaScript
bruh what would the video be about..
tutorial on how to make a network in JS?
1. just use a library
2. or use a library
3. maybe use a library
Thank you!
The mouse cursor could be bigger and higher-contrast against the screen. As it is, it's sometimes hard to follow the mouse when you explain. Thanks
Does the output need to loop back to the input for feedback? It feels like this is required for learning in general. Your diagram doesn't show this.
I came to this video first. Watched it twice. Was still confused. Went on to watch 5 different other vids on NN’s and 2 vids on CNN’s. Revisited this video. Now it makes sense
share those video links that you watched with us so we can all be on the same page.
Hey, what is the order of these things, like machine learning, AI, deep learning, etc.? Where should I start? Could someone list them from basics to advanced?
Very good
So far I understand, I just don't understand why you would name one of the input variables Weight, since the connections are also called weight(s); it could confuse people.
Ha true, good point. It should have been lb, oz, or kg. I can see how that would be confusing
This is amazing!
The weight here is referred to the weight of the animal or the weight of the relations
I keep finding how they work, but not how to actually use them. If I want a certain output, how do I set up my inputs?
So how do you correct the actual weights? That is the most interesting question. The rest of the video is easy.
At 10:30, what is delta? He says it's the "difference", but the difference of what?
crystal clear explanation. thanks
Ok, I watched your video, I think I may now have some idea of what NN's are... but I'm not sure. Give me more examples!
(yes, I personally have a very slow learning rate)
P.S. I'm just kidding) Thank you for the great tutorial)
Hello, everybody. I'm Thanapol from Chulalongkorn University, Thailand
Thanks a lot...
There is something I want to know. After learning is complete and we have a very low error level, say 1 in a million, and then an error pops up on the output, how do we account for that, given we have already stopped learning? What I am saying might not be clear, but what if the error is crucial? So far it seems impossible to reach a zero error level, and the data in play is endless. Do we have to compare the human error rate to our network's and move on, or what do we do next? I am starting to learn neural networks and would like an answer if possible. Also, if we implement this on a real case out there, like a self-flying jet with an endless play of nodes, is the simulation of this case, or any other, considered credible in terms of laws and such?
Hi!
We are researchers in human-computer interaction (HCI) looking for people who have taken an
initiative to recently learn Machine Learning on their own, for career, course or curiosity. It seems you are in that place currently. Would you mind telling us here (www.surveymonkey.ca/r/SelfLearning_ML) about your experiences, any difficulties you faced while self-teaching ML, and how you overcame them? There is also a chance to win a $50 gift card.
You can help this project by taking 5-10 minutes to participate in our study.
For more details, see here: www.surveymonkey.ca/r/SelfLearning_ML
Please share this request with your colleagues or friends who fit this description. People from any major/background may participate. The survey will be open until July 23, 2020.
4:35 bros intrusive thoughts started kicking in 💀☠
literally everything else bounced off my brain
Well that escalated quickly
you didn't explain what these weights and biases are, and what their use is :(
Harsh Raj Always free I am not completely sure about this, but I think they are just a few randomly generated numbers less than one, used to alter the first numbers in big ways.
That way, if the answer is too far from the correct answer, you can change one of the biases or weights to make the end result more correct.
And their use after training is that they can be copied and pasted instantly instead of retraining the machine every time.
Crystal clear
Another issue with lower learning rates is local minima
Thanks for the tutorial. Good. I have another question: once the first animal's data is done, the weights are changed by back propagation. But when the second animal's data is passed, are the weights restored? It's not clear from the tutorial. Thanks.
Been watchin many videos but this is damn good !
i don't get activation functions. how does that work??
Activation functions are non-linear equations applied to each of the calculated node values in order to create a non-linear output.
Why care about non-linear outputs? Well, because neural networks were designed to solve non-linear problems.
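For instance, sigmoid squashes any input into (0, 1) and tanh into (-1, 1); a quick sketch:

```python
import math

# Two common activations, both non-linear "squashing" curves. Without one,
# stacked layers collapse into a single linear map; with one, each layer
# can bend the decision boundary.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # output always in (0, 1)

def tanh(z):
    return math.tanh(z)                # output always in (-1, 1)

print(sigmoid(0.0), tanh(0.0))  # 0.5 0.0
```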
Once I've built a trained NN with my sample data, how can I use it with new inputs to get the outputs?
thanks
Just remove the improving code and add in the inputs to get the answer
I am learning how to learn to learn machine learning.
Is this easy, or am I just not getting it? Bro, I really want to know the concept of neural networks and how to apply them
What a video! You have explained it beautifully, well done
thank youuuuuuuu
thanks
I wonder if we will ever be able to find the activation function and the hidden layer structure which our brains use to build better neural networks.
our brain doesn't use math to make decisions
@@arandomguy46 Neither do artificial neural networks. We just model them with mathematical functions. We've predicted the motion of subatomic particles with mathematical functions, so what's stopping us from modelling the brain in the same way?
Can you suggest some recommended basic knowledge?
@John Doe thanks for your suggestion
@John Doe thanks again
No clear explanation of the activation function and which one is chosen.
Too number-filled an explanation. The graphs/activation stuff you showed didn't actually help with the explanation. Why would I use sigmoid or tanh...? It felt like you skipped over backprop with bare mentions. I think your explanation made sense, it's just hard to visualize for a beginner. Maybe you need code.
I also feel like that for almost any tutorial out there that "explains" backpropagation.
They just skip the presentation of the math:
"Oh yeah, that's how we feed forward on the network... yadda yadda yadda... *wastes 30 mins talking about something easy*... and then backpropagation... *shows a few arrows pointing backwards through the network for 5 seconds*... that's it"
Him: it's not super complex
Me: you mean it's not 100% complex
Him: YESS!
Me: SOoo the limit is 99.
Okay I feel like I can make one now, just as soon as I figure out how to sigmoid.
Great explanation but the whosit whatsit was a little confusing for me.
Forgot about the biases, that's why I was getting some random noise instead of image.... Thx
This didn't help... perhaps I need to go back and re-watch that first Neural Network video again?
It took me about 10+ exposures to neural network explanations to "get" how they work, so I'd say watch this video ua-cam.com/video/9Hz3P1VgLz4/v-deo.html to understand how you USE one... then watch this video again, and pause as much as needed at each step. It may also be helpful to watch other youtubers' videos, although I did find most explanations out there very confusing and mathematically intense when learning.
Better go here ua-cam.com/users/giantneuralnetwork
This video is poorly scripted.
Seems simple enough. Now we just need n neural networks trained to identify and respond to different stimuli and we can retire as a species!
What is "whosit" and "whatsit"?
I think it's American slang for "option A" or "option B", as simple as that.
Hey, when are you gonna upload the last part for the Vue and Node.js deployment?
Hey what does the * value refer to in your change formula
look at your keyboard.... it's multiplication
@@FerMJy I'm talking about what 'value' is referring to, but nice try
@@xxsamperrinxx3993 The change "formula" is for a neuron's (or synapse's) weight... sorry for the previous response.
In backprop, I think you have to change:
- weights
- biases
- and... you have to calculate delta (which is the most difficult to find info on, because lazy people don't want to explain the cycle of derivatives)
@@FerMJy no like literally what does VALUE mean
@@xxsamperrinxx3993 I believe value refers to the actual value from the output: 1 if it's a whatsit, 0 if not, for example. For the whatsit output node, if you have a prediction of 0.35, delta would be 0.65 and value would be 1. For the whosit output node, if you have a prediction of 0.96, your delta would be 0.96 and the actual value 0.
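In code, that description of delta amounts to this (a sketch using the same made-up prediction values from the comment above):

```python
# Sketch of the delta described above: the gap between the actual value
# and the network's prediction for each output node.
def delta(actual, prediction):
    return abs(actual - prediction)

print(delta(1, 0.35))  # 0.65 for the whatsit node (actual value 1)
print(delta(0, 0.96))  # 0.96 for the whosit node (actual value 0)
```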
thx
I have just realized that, since our brains are neural networks, this video trains neural networks how to train neural networks.