When you were cycling through the inputs to update the weights, only third inputs were predicted correctly, will the algorithm come back to the inputs that it couldn't predict correctly? If yes then at what stage?
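A possible answer, sketched in code (not the video's exact implementation): the standard perceptron training loop cycles through all inputs again on every epoch, so inputs that were misclassified in one pass are revisited on the next full pass, until a pass produces no errors.

```python
# Hedged sketch of a standard perceptron training loop. Every epoch iterates
# over ALL inputs again, so previously misclassified points get revisited
# until a full pass makes no mistakes (or we hit max_epochs).

def step(z):
    return 1 if z >= 0 else 0

def train(inputs, targets, alpha=0.1, max_epochs=100):
    n = len(inputs[0])
    w = [0.0] * n              # weights, initialized to zero
    b = 0.0                    # bias
    for epoch in range(max_epochs):
        errors = 0
        for x, t in zip(inputs, targets):
            p = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            if p != t:         # misclassified: update weights and bias
                errors += 1
                w = [wi + alpha * (t - p) * xi for wi, xi in zip(w, x)]
                b += alpha * (t - p)
        if errors == 0:        # a clean pass: every input predicted correctly
            return w, b, epoch
    return w, b, max_epochs

# AND gate: linearly separable, so the loop eventually has an error-free pass
w, b, epochs = train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```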
Hey Jacob, I'm sorry if I got this wrong, but shouldn't the group of points on the top be getting the value 0 instead of 1, and the group below get 1 instead of 0 (at about 12:40)? But I guess you corrected it later.
Consider the point (0,1000). This is clearly above the line. What value would it have? 0·wx + 1000·wy + b = 1000·0.5 = 500 (taking wy = 0.5 and b = 0). a(500) = 1 because 500 is positive, so 1 is the correct classification for points on top. It is possible to set the weights and biases in a way that flips where 0 and 1 are, but this example is correct.
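The reply's arithmetic can be checked in a few lines (the values wy = 0.5 and b = 0 are taken from the reply itself; wx is an arbitrary placeholder, since x = 0 makes it irrelevant):

```python
# Verifying the classification of (0, 1000) from the reply above.
def a(z):                     # step activation: 1 for non-negative z, else 0
    return 1 if z >= 0 else 0

wx, wy, b = 0.3, 0.5, 0.0     # wx is arbitrary: x = 0 zeroes its contribution
x, y = 0, 1000
z = x * wx + y * wy + b
print(z, a(z))                # 500.0 1 -> points above the line get class 1
```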
In this particular example, one of the perceptron inputs is x, and the other is y. The reason we are trying to draw a line (hyperplane) in this space is that we want to have a way of categorizing all possible inputs. The perceptron assigns a class to each possible set of inputs based on which side of the line you end up on. This can be a little bit confusing, but it is even worse in the kinds of high-dimensional spaces where neural networks are typically applied.
I was so confused about the math and was looking for a solution all morning, and now I found it and clearly understand how it works. Thanks a lot!
Thank you Jacob. No fancy presentation, just a brilliant explanation of the concept I need for Machine Learning.
One of the best youtube videos on this topic. Nicely done.
The video I have been looking for! Thank you very much!
Absolutely astonishing! This is the first time I understood without skipping!
THIS IS JUST PHENOMENAL :) Thank you so much, that's what I have been searching for the whole day. Now I get it!!
Now I understand how the perceptron works. Thank you!!!!
Fluent explanation of complex mathematical concepts, without missing out on the details.
Your videos have helped me on more than one occasion and for that I humbly thank you for your effort.
Nice presentation.
You are very smart and knowledgeable.
Please make more!!! Great Videos!
Is there a theorem which says the weights and biases will eventually make correct predictions for small alpha and linearly separable data?
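Yes: the perceptron convergence theorem (Novikoff) guarantees that for linearly separable data, the algorithm makes only finitely many weight updates before classifying everything correctly, for any fixed positive alpha. A quick empirical check on made-up separable data (a sketch, not a proof):

```python
# Empirical check of perceptron convergence on linearly separable toy data.
import random

def step(z):
    return 1 if z >= 0 else 0

def train(points, targets, alpha=0.01, max_epochs=1000):
    w, b = [0.0, 0.0], 0.0
    for epoch in range(max_epochs):
        mistakes = 0
        for (x, y), t in zip(points, targets):
            p = step(w[0] * x + w[1] * y + b)
            if p != t:
                mistakes += 1
                w[0] += alpha * (t - p) * x
                w[1] += alpha * (t - p) * y
                b    += alpha * (t - p)
        if mistakes == 0:            # converged: a full error-free pass
            return True, epoch
    return False, max_epochs

# Separable data: class 1 above the line y = x, class 0 below, with a margin.
random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
pts = [(x, y) for x, y in pts if abs(y - x) > 0.1]   # keep a margin
ts  = [1 if y > x else 0 for x, y in pts]
converged, n_epochs = train(pts, ts)
print(converged)
```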
Can you make a math course on vectors, from basic calculations up to this stuff? I don't understand vectors and the e3 notation.
Great explanation.
How do we update the bias in the last example?
amazing
Great work!
Great Videoooo!!!!!
excellent explanation
thank you very much. This is very useful tutorial
How do we determine the target, the weights (omegas), and the learning rate?
Thanks for this clear explanation
Simple and helpful
Why do we introduce a bias unit in the first place?
Best!
Thank you so much. This is extremely helpful!
In the case of a multi-layer perceptron, do we use the same formula: alpha*(t - p(i))?
Why?
Very well explained. Thanks a lot!
Can someone explain to me what the x and y axes represent, concretely?
Thanks very helpful
Here by watching @sakho kun
Loved it!
very helpful, thanks
Good one ;)
Thank you a lot.
Thanks. Helpful.
You can't use a pure step function because you can't propagate the error backward through it!!!
The volume of the voice is damned low
I think it's not clear what you did from @22:00.
alpha*(t - p(i)) = 0.1*1 = 0.1. w = (0,0,0) and i = (1,1,1), so w + alpha*(t - p(i))*i = (0,0,0) + 0.1*(1,1,1) = (0,0,0) + (0.1,0.1,0.1) = (0.1,0.1,0.1).
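That update can be verified directly (with t = 1 and p(i) = 0, as in the example):

```python
# Checking the weight update w + alpha*(t - p)*i from the reply above.
alpha, t, p = 0.1, 1, 0          # prediction was 0, target is 1
w = [0.0, 0.0, 0.0]              # starting weights
i = [1, 1, 1]                    # input vector
w_new = [wj + alpha * (t - p) * ij for wj, ij in zip(w, i)]
print(w_new)                     # [0.1, 0.1, 0.1]
```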
Great video. BTW you sound like Mark Zuckerberg.
I don't think that's a compliment.
After 21 minutes, everything becomes confusing