I really love this series of statistics sagas. Whenever I want to start a data analysis in Excel or some other tool, this basic knowledge helps me better understand what I'm doing.
To everyone who's wondering where part 2 is, here's the link (I can't post the full link, so here's the URL after the .com): /watch?v=f6OnoxctvUk
This comment needs more votes. Relevant even after 8 years.
This comes from homework I'm currently working on, a formula for computing m and b: let vector x = [x1, ..., xn]^T, y = [y1, ..., yn]^T, and matrix A = [x, [1, ..., 1]^T]; then [m, b]^T = (A^T A)^{-1} A^T y.
If you move the matrix inverse from the right side to the left (i.e., premultiply both sides by A^T A) and then expand the matrix multiplication, it actually gives you the system of equations at the end of the video. I couldn't understand why at first, but your video explains it much better.
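A minimal numerical sketch of that equivalence, assuming NumPy and made-up sample data: the closed form (A^T A)^{-1} A^T y and the expanded 2x2 system (the normal equations from the end of the video) give the same slope and intercept.

```python
import numpy as np

# Hypothetical sample data, just for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Design matrix A = [x, 1]: one column for the slope, one for the intercept.
A = np.column_stack([x, np.ones_like(x)])

# Closed form: [m, b]^T = (A^T A)^{-1} A^T y
m, b = np.linalg.inv(A.T @ A) @ A.T @ y

# Equivalent: premultiply both sides by A^T A and solve the resulting
# 2x2 system directly (the normal equations).
m2, b2 = np.linalg.solve(A.T @ A, A.T @ y)

print(m, b)                            # slope and intercept
print(np.allclose([m, b], [m2, b2]))   # True: same solution
```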
At 10:31 the "two" points are actually the same point: (x_bar^2 / x_bar, (x_bar * y_bar) / x_bar) is exactly (x_bar, y_bar).
No, it's not the same point.
It's x^2_bar (the average of the x_i^2), not x_bar^2 (the square of the average). Remember that the bars are averages over n terms, so you can't just cancel them out.
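A quick numeric check of that distinction, as a sketch with made-up numbers: the mean of the squares, x^2_bar, generally differs from the square of the mean, (x_bar)^2, so the ratio x^2_bar / x_bar is not x_bar.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])

mean_of_squares = np.mean(x ** 2)   # x^2_bar: (1 + 4 + 9) / 3 ≈ 4.67
square_of_mean = np.mean(x) ** 2    # (x_bar)^2: 2^2 = 4.0

print(mean_of_squares, square_of_mean)   # 4.666..., 4.0 — not equal
print(mean_of_squares / np.mean(x))      # ≈ 2.33, not x_bar = 2
```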
Elegant proof! Many thanks.
This playlist still needs some tidying up. Just move the part 2 proof between parts 1 and 3, please.
Really Love ❤️ your explanation...!!!
Thank you ...!!!
This is awesome, thanks Kan!
thanks
thanks! You're always the best
Thank you so much.
Please add "Proof (part 2) minimizing squared error to regression line | Khan Academy" to the list.
Excellent insight.
The condition that the partial derivatives are 0 will also hold at maxima. So how do we know our point is a minimum and not a maximum?
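One standard way to check, sketched here as an aside (the video itself stops at setting the partials to zero): apply the second-partial-derivative test to the squared error.

```latex
% Hessian of SSE(m, b) = \sum_i (y_i - m x_i - b)^2
\[
H = \begin{pmatrix}
\dfrac{\partial^2 \mathrm{SSE}}{\partial m^2} & \dfrac{\partial^2 \mathrm{SSE}}{\partial m\,\partial b} \\[2mm]
\dfrac{\partial^2 \mathrm{SSE}}{\partial b\,\partial m} & \dfrac{\partial^2 \mathrm{SSE}}{\partial b^2}
\end{pmatrix}
= 2 \begin{pmatrix}
\sum_i x_i^2 & \sum_i x_i \\
\sum_i x_i & n
\end{pmatrix}.
\]
% det(H)/4 = n \sum_i x_i^2 - (\sum_i x_i)^2 \ge 0 by Cauchy-Schwarz
% (strictly positive unless all x_i are equal), and H_{11} > 0,
% so H is positive definite and the critical point is a minimum.
```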
This playlist is out of order. Part 1 is #52, part 2 is #56, and part 3 is #53.
This goes too fast for me; I'm going to rewatch. x(
Watch Part 2 here before you follow part 3:
ua-cam.com/video/f6OnoxctvUk/v-deo.html
💯
you're awesome
How did he decide that the figure would be a 3D figure?
Because there are two variables (m, b). When y is a function of only one variable, e.g. y = f(x), it's a 2D graph, and when y is a function of two variables, e.g. y = f(x, z), it's a 3D graph.
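To make that concrete, here's a small sketch (assuming NumPy and matplotlib, with made-up data): plotting the squared error over the (m, b) plane shows the 3D bowl-shaped surface from the video.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data, just to give the error surface a shape.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 5.9, 8.2])

# Grid of candidate slopes m and intercepts b.
m_grid, b_grid = np.meshgrid(np.linspace(0, 4, 100), np.linspace(-3, 3, 100))

# SSE(m, b) = sum_i (y_i - (m*x_i + b))^2, evaluated over the whole grid.
sse = sum((yi - (m_grid * xi + b_grid)) ** 2 for xi, yi in zip(x, y))

# Because SSE depends on two variables, its graph is a surface in 3D.
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(m_grid, b_grid, sse, cmap="viridis")
ax.set_xlabel("m"); ax.set_ylabel("b"); ax.set_zlabel("SSE")
plt.show()
```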