Yes, my Boer brother! You give us a good name. Well educated!
Excellent explanation. Just what I needed!
I was looking for this breakdown for a long time. Thanks a lot mate
This is what we call a math teacher!
One of the best education videos!!!!! GOD BLESSSS YOUU & YOUR BEAUTIFUL FAMILY BROTHER !!!!!!!!! PLS KEEP UP THE GOOD WORK!
Thank you so much for making this video. You just saved my Econometrics behind today.
Glad to help, Charie! It brings me joy to know that I am helping.
I have seen many videos, but this one explains the formulas used in linear regression in more detail. Great work!
I am a data science student. I wanted to see how this was derived. This summed it up perfectly! Thanks
Wow! I have been searching for lessons like this! I just found it, and your way of explaining is easy to understand. 🙏 I'll keep following your channel.
Excellent explanation and demo! 👏🏻👏🏻👏🏻 Thank you so much.
This video deserves to be much much higher in youtube search results!
Nice summarization and explanation of what the matrix form for simple linear regression model looks like, what it is made from, and how it can be constructed. Thanks! You have a good presentation/teaching style, IMO. I hope your channel grows, to help more people. Thanks again!
Thanks man. You are a lifesaver
Keep up the neat work. Really good work.
Great and Elegant explanation. Thank you
Hey! You were in my recommendations again!
Thank you so much
This helped a lot. Thank you so much!
00:00 Introduction and Design Matrix
02:00 Beta Hat Formula
02:42 The matrix X'X
06:24 Inverse of X'X
09:28 The matrix X'Y
Thanks!!!!
Thanks Boer
It is my pleasure to be of service, gcuma.
I did not understand at 8:52 how the sum of xi² minus n times x̄² is equal to the sum of (xi − x̄)². Let me explain with an example: suppose xi is −2 and the mean x̄ is 1. Then xi² − x̄ gives 3, while (xi − x̄)² gives 9, that is (−2 − 1)². And on expanding, (xi − x̄)² is xi² − 2·xi·x̄ + x̄², not xi² − x̄.
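In case it helps anyone with the same question: the identity only holds under the summation. Expanding the square, the sum of (xi − x̄)² equals the sum of xi² minus 2·x̄·(sum of xi) plus n·x̄², and since the sum of xi is n·x̄, the middle term becomes −2n·x̄², leaving the sum of xi² minus n·x̄². A single point like xi = −2 can't show this, because the cancellation needs the whole sum. A quick numpy check, with made-up sample values:

import numpy as np

x = np.array([-2.0, 1.0, 4.0, 5.0])  # made-up sample values
n = len(x)
xbar = x.mean()

lhs = np.sum((x - xbar) ** 2)         # sum of (xi - xbar)^2
rhs = np.sum(x ** 2) - n * xbar ** 2  # sum of xi^2 minus n * xbar^2

print(lhs, rhs, np.isclose(lhs, rhs))  # 30.0 30.0 True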
This is amazing! Where are you from?
Something strange:
the equation
BETA_HAT = X.I @ Y
in Python (NumPy) gives the same solution as
BETA_HAT = (X.T @ X).I @ X.T @ Y.
Try it!
Hi janslesp, a good foray into the calculations, and good work on applying the thinking in Python. Have a go with a design matrix X that is not square and you will see that the equation does not work then.
Your equation works when the matrix X is an invertible square matrix, but it will not work when X is not square.
Try the example below (note that for the normal equation, X needs full column rank, so at least as many rows as columns).
import numpy as np
x = np.array([[1, 2, 3],
              [1, 5, 7],
              [1, 4, 3],
              [1, 3, 6]])
y = np.array([2, 6, 7, 4])
# beta_hat runs fine: x.T @ x is 3x3 and invertible
beta_hat = np.linalg.inv(x.T @ x) @ x.T @ y
# this line raises LinAlgError because x is not square
beta_hat_2 = np.linalg.inv(x) @ y
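For reference, np.linalg.lstsq solves the same least-squares problem via the SVD without explicitly inverting x.T @ x; a minimal sketch with the same data:

import numpy as np
x = np.array([[1, 2, 3],
              [1, 5, 7],
              [1, 4, 3],
              [1, 3, 6]])
y = np.array([2, 6, 7, 4])
# least-squares solution, no explicit inverse of x.T @ x
beta_lstsq, residuals, rank, svals = np.linalg.lstsq(x, y, rcond=None)
beta_hat = np.linalg.inv(x.T @ x) @ x.T @ y
print(np.allclose(beta_hat, beta_lstsq))  # True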
@@BoerCommander Hello Boer. I'm trying to understand the rules of this wonderful universe of linear algebra, Python and machine learning. You have no idea how much pleasure it gives me to exchange information with talented people from other countries. I'm not a data science professional and I'm trying to overcome my difficulties. Thank you. Today I went back to playing a little on the computer. I used the code below where, as you said, my X is not an invertible matrix, because X is not square and therefore not invertible. But I don't understand: numpy still computes the ".I" method in this case! Does numpy have some algorithm to calculate an X.I matrix that simulates an inversion? Note that in the end we get the same result: tetas_strange_equation = tetas_Normal_Equation
The code is:
import numpy as np
X=np.matrix([
[1,35,70,0],
[1,15,50,0],
[1,42,80,0],
[1,25,70,1],
[1,28,90,1],
[1,12,65,0],
[1,34,72,0]])
y=np.matrix([90, 35, 98, 70, 62, 32,68]).T
# Strange equation
tetas_strange_equation = X.I @ y
print(tetas_strange_equation)
# Normal equation
tetas_Normal_Equation = (X.T @ X).I @ X.T @ y
print(tetas_Normal_Equation)
OUTPUT:
[[21.77437371]
[ 2.47318883]
[-0.37736477]
[ 8.8753038 ]]
[[21.77437371]
[ 2.47318883]
[-0.37736477]
[ 8.8753038 ]]
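For what it's worth: when an np.matrix is not square, its .I property returns the Moore–Penrose pseudoinverse (np.linalg.pinv, computed from the SVD) rather than the ordinary inverse. And whenever X has full column rank, pinv(X) equals (XᵀX)⁻¹Xᵀ, which is exactly the normal-equation matrix, so the two lines have to agree. A minimal check of that equality with the same X, using np.array instead of np.matrix:

import numpy as np
X = np.array([[1, 35, 70, 0],
              [1, 15, 50, 0],
              [1, 42, 80, 0],
              [1, 25, 70, 1],
              [1, 28, 90, 1],
              [1, 12, 65, 0],
              [1, 34, 72, 0]], dtype=float)
# Moore-Penrose pseudoinverse, computed via the SVD
X_pinv = np.linalg.pinv(X)
# normal-equation form, valid because X has full column rank
X_ne = np.linalg.inv(X.T @ X) @ X.T
print(np.allclose(X_pinv, X_ne))  # True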