These two guys are hilarious! Eases the pain of learning :D
Why is learning a pain?
@leif1075 According to this video (ua-cam.com/video/Z2N5a7XZWg8/v-deo.html), learning something new activates stress responses in the brain. I can't speak to the veracity of that claim, but the video seems good.
This format of learning through a dialog like this is fantastic! Thanks for posting :)
The data revisionist and invisible point jokes are hilarious! These two guys make it easier to understand. Very good idea to have this noob-expert exchange to make it more accessible!
Michael's comment at 2:32-2:47 was super helpful, esp. for folks with weaker linear algebra backgrounds.
Can you please explain? I wasn't able to understand it.
Excellent. Thank you for making this video. I needed to know about hyperplanes in the context of support vector machines. And you nailed it.
Awesome video. This is one of the most intuitive explanations of SVM I've seen so far. And I'm coming from Andrew NG's course where he went about explaining it in a very roundabout way and I didn't grasp anything.
An awesome and creative way to present, by two genuine people.
Thank you very much for this video! It helps me get introduced to the conceptual understanding behind SVM!
I like the two-man system where one guy acts like he barely has an idea of what's going on (just like me).
good explanation!
Thanks, very useful information!
Why is the top line w^T x + b = 1? I mean, shouldn't it be equal to the distance between the plane and the first positive point?
Because they want to classify positive points as +1 in order to distinguish the two cases. The value 1 itself is arbitrary: you can always rescale w and b so that the closest points on either side satisfy w^T x + b = ±1.
I still can't figure out why it is 1 or -1 for points on those two lines. And if that's the case, what is the equation for points that do not lie on the lines? Is it something like w^T u + b > 1, and where does that come from?
Because at the beginning of this lecture, he mentioned that this is a binary classification problem where the class variable has two levels, + and -, so he used +1 for the + level and -1 for the - level. In this sense, the top grey line is w^T x + b = 1, and all the + data points on or above this line are in the zone w^T x + b >= 1. The same applies to the bottom grey line with -1. Hope this helps, as I am also still learning.
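To make the zones concrete, here is a small numpy sketch. The weights, bias, and points are made up for illustration (not taken from the video): points with w^T x + b >= 1 fall in the + zone, points with w^T x + b <= -1 fall in the - zone, and the sign of the score gives the predicted class.

```python
import numpy as np

# Hypothetical weights and bias, just for illustration (not from the video).
w = np.array([1.0, 1.0])
b = -3.0

points = np.array([
    [4.0, 2.0],   # w.x + b =  3.0 -> inside the + zone (>= 1)
    [3.0, 1.0],   # w.x + b =  1.0 -> exactly on the top grey line
    [1.0, 0.0],   # w.x + b = -2.0 -> inside the - zone (<= -1)
])

scores = points @ w + b
labels = np.sign(scores)   # +1 for the + class, -1 for the - class
print(scores)              # [ 3.  1. -2.]
print(labels)              # [ 1.  1. -1.]
```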
What does "quiz" mean at the end?
One question: Since the gray lines are different, shouldn't they have different b's (intercepts)? Like b1 and b2
The b is the intercept term of the yellow line, where w^T x + b = 0. When they move to the grey lines, they set the same expression equal to +1 and -1, but w and b themselves stay the same.
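Here's a quick check in Python (with hypothetical 2D weights, not from the video) that all three lines share the same w and b; only the constant on the right-hand side changes, which shifts the line without changing its slope:

```python
# Hypothetical 2D weights and bias, just for illustration.
w1, w2, b = 1.0, 2.0, -3.0

# The yellow line is w1*x + w2*y + b = 0; the grey lines set the very same
# expression to +1 and -1. Solving for y shows all three lines share w and b:
#   y = (c - b - w1*x) / w2   for c in {-1, 0, +1}
for c in (-1, 0, 1):
    slope = -w1 / w2        # identical for all three lines
    y_at_0 = (c - b) / w2   # the line's height at x = 0
    print(f"c = {c:+d}:  y = {slope:.2f}*x + {y_at_0:.2f}")
```

All three printed lines have the same slope; only the height shifts, because the +1/-1 is added on the right-hand side rather than changing b.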
The plane helps you to define the support vectors. The support vectors help you to define the plane. But when I start with only a set of points, with no support vectors or planes, what do I do? How would I know which is which?
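In practice you never pick the support vectors by hand. Training solves an optimization over all the points at once (maximize the margin subject to the classification constraints), and the support vectors fall out of the solution. A minimal sketch with scikit-learn, using made-up toy points:

```python
import numpy as np
from sklearn.svm import SVC

# Made-up toy points: three - examples and three + examples.
X = np.array([[1, 1], [2, 1], [1, 2],
              [4, 4], [5, 4], [4, 5]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6)  # very large C ~ hard margin
clf.fit(X, y)

print(clf.support_vectors_)       # the points that ended up defining the plane
print(clf.coef_, clf.intercept_)  # the learned w and b
```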
Why do we transpose w in the equation?
I have the same question. I can't understand how he wants to explain SVM while ignoring the most crucial part, which is why w is transposed in the equation.
w is a vector (a matrix of n rows and one column) representing the parameters, and x is another n×1 vector representing a point. We can't multiply two matrices that are both n×1, and y has to be a single value (a 1×1 matrix), so we multiply w transpose (1×n) with x (n×1) to get a 1×1 result: the sum of the products of each entry of w with the corresponding entry of x.
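In code, w^T x is just the dot product. A tiny numpy sketch with made-up numbers:

```python
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # parameter vector, n = 3
x = np.array([1.0, 3.0, 4.0])    # one data point

# w^T x is the sum of the products of corresponding entries...
manual = sum(wi * xi for wi, xi in zip(w, x))
# ...which is exactly the dot product.
print(manual, np.dot(w, x))      # 1.0 1.0  (2*1 + (-1)*3 + 0.5*4)
```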
You rock
Please make a video about SVR.
what is SVR?
Why is it w transpose? Why do we transpose w before multiplying it with the feature vector x?
w is a vector, or a matrix of dimension n×1. The convention in machine learning is that vectors are column vectors. So both w and x are column vectors, and to multiply two vectors as matrices, we need to transpose one of them to get compatible shapes. So we compute w.T * x, which is equivalent to the dot product of the two vectors.
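The shape bookkeeping, spelled out in numpy (again with made-up numbers): treating w and x as (n, 1) column matrices, only w.T @ x has compatible shapes, and it yields a 1×1 result.

```python
import numpy as np

# Treat w and x as column vectors, i.e. (n, 1) matrices.
w = np.array([[2.0], [-1.0], [0.5]])   # shape (3, 1)
x = np.array([[1.0], [3.0], [4.0]])    # shape (3, 1)

# (3, 1) @ (3, 1) is not a valid matrix product, but w.T has shape (1, 3),
# so w.T @ x is (1, 3) @ (3, 1) -> (1, 1): a single number.
y = w.T @ x
print(y.shape, y[0, 0])   # (1, 1) 1.0
```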
It didn't mention the term "Support Vector Machine", apart from the title of the graph.
Udacity's system allows you to learn new concepts little by little. If you want more information, check out the other videos. These guys have managed to make one of the hardest topics very easy to understand!
5:39
How do they make the hand and the pen transparent?
I guess it is Udacity's own program. I searched for it for a long time, and a friend of mine working at Udacity told me it is their own in-house tool.
Excellent...
Where can I find more videos on the same topic?
Udacity
check out Georgia Tech Udacity course. You will not regret it!
why transpose of W?
same question
Because w and x are two vectors/matrices of the same dimensions. You would have to transpose one of them to multiply them.
In order to have a valid matrix multiplication, you need to transpose the w matrix.
0:54 I'm going to fix that... by putting a minus sign here LMAO
hahaha they are just awesome
lol
Not useful at all!