Linear Regression From Scratch in Python (Mathematical)
- Published Jun 15, 2024
- In this video we implement the linear regression algorithm from scratch. This episode is highly mathematical.
◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
📚 Programming Books & Merch 📚
🐍 The Python Bible Book: www.neuralnine.com/books/
💻 The Algorithm Bible Book: www.neuralnine.com/books/
👕 Programming Merch: www.neuralnine.com/shop
🌐 Social Media & Contact 🌐
📱 Website: www.neuralnine.com/
📷 Instagram: / neuralnine
🐦 Twitter: / neuralnine
🤵 LinkedIn: / neuralnine
📁 GitHub: github.com/NeuralNine
🎙 Discord: / discord
🎵 Outro Music From: www.bensound.com/
Timestamps:
(0:00) Intro
(0:19) Mathematical Theory
(12:48) Implementation From Scratch
(24:05) Outro
Please do more mathematical implementations like this! This really helped me understand the math behind these algorithms.
😭😭
That was just wow! The way you explained it was amazing.
Thank you!
I recently stumbled upon this while looking for a similar approach in Python, and I've subscribed now.
Thank you very much for imparting knowledge.
This video is amazing, good job. I'm now actually thinking about revisiting the math classes I couldn't take before in order to get better at these machine learning algorithms.
Thank you so much. I'm working through a problem set for a Neural Analysis class and this really helps. Great video.
I don't often comment on videos, but this one is really good; you explained the math concept really well by doing the examples on the graph etc… Good stuff on this channel!
AMAZING video!!! I love these videos which teach theory too! Thank youuu!!!
Dude, slammer video!! Love the fact that you made the math interesting and super easy to understand.
Thanks man! Awesome video that dumbed it down enough for me. Could you do linear regression for fitting a sphere to points, given the sphere radius (or not), as is used in terrestrial laser scanning? If you've never messed with it: you place spheres around what you are scanning, then the software uses the spheres to align all of the different scans at different locations together.
I was searching everywhere and finally found what I need. Your video really clears up the fundamentals of creating a linear regression model. Thank you.
Amazing video, looking forward to more implementations from scratch!
Greatly explained in simple words, looking forward to learning more methods from you.
I am subscribing, definitely. You have taught, in the most straightforward and explanatory way, a concept that other videos made a bit too complicated. Thank you!
Thanks very much for this!! I am a data analysis student and close to giving up but still hanging on!
This is a great video! Thank you so much!
Excellent explanation of the implementation of linear regression, thanks!!
Thank you so much man! Great vid...Keep up the good work!
Thank youuu :) Loved the way you explained it.
You got a new subscriber... Best in detail explanation ever!!!
Waiting for series of such videos 🤩
your explanations are really clear and precise, thank you very much♥♥♥
I do like all your videos that contain the keywords "mathematical theory" and "from scratch" :). Please do more similar videos. Thank you kindly
same, stuff from scratch is the best stuff for learning
@@xKreesherZ yeah, and it's really fun
I am always so proud when I am able to code something from scratch
The math part was awesome! Thanks a lot, very clear and simple.
That was so much easier to understand. Thank you.
Awesome tutorial! Could you please explain why you use gradient descent to minimize squared error instead of using the formula: divide the standard deviation of y values by the standard deviation of x values and then multiply this by the correlation between x and y?
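(For reference, the closed-form version this comment describes does work for single-feature regression; a minimal sketch with NumPy, where xs and ys are hypothetical stand-ins for the video's data:)

import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical feature values
ys = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # hypothetical targets

# slope = correlation(x, y) * (std of y / std of x); intercept from the means
r = np.corrcoef(xs, ys)[0, 1]
m = r * ys.std() / xs.std()
b = ys.mean() - m * xs.mean()
print(m, b)  # gradient descent converges to the same line, just iteratively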
love the videos, finally
Amazing, simple explanation of linear regression. Please also cover the other techniques in the same way.
got a good idea about linear regression thnx bud!!
You’re a life saver thank you soooo much❤❤
Thanks a lot for great explanation!
Thank you, very useful and clear.
Brother! I simply love you after I came across this video!
Nicely done!
Thanks a bunch. My professor taught this in class and I was like a deer in headlights, but after your video I have understanding.
loved it bro u r just amazing
great video NeuralNine
I feel this is gradient descent, is it not?
If it is, is there an implementation for least squares?
Because I feel it is just so random: you take some values and you choose the least one from those random values, not necessarily the least value you can get.
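(For context: gradient descent is not a random search. Each step moves m and b downhill against the gradient of the error, so for a small enough learning rate L every iteration decreases the loss:

$$m \leftarrow m - L\,\frac{\partial E}{\partial m}, \qquad b \leftarrow b - L\,\frac{\partial E}{\partial b}, \qquad E = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - (m x_i + b)\bigr)^2$$

For the exact least-squares minimum, see the closed-form solutions discussed further down the thread.)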
Next for logistic regression please!
This is amazing!
How can I store the current m and b values for every iteration?
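(A minimal way to do that, assuming the training loop from the video, where gradient_descent(m, b, data, L) returns the updated pair:)

history = []  # one (m, b) snapshot per iteration
m, b, L = 0.0, 0.0, 0.0001
for i in range(1000):  # epoch count as in the video
    m, b = gradient_descent(m, b, data, L)
    history.append((m, b))  # record the parameters after each update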
Brother, thank you. It just wouldn't get into my head. Thanks to you I've understood it.
Great video, very informative
awesome video and dark mode paint rocksss
Awesome representation!!!
Yes, please do more videos like this one. Even Einstein gives it 2 thumbs up! :)
Yoo, out of all the explanations I saw, you did the best work. This is what I wanted. Best work, man, subbed.
Combining theory with practice, and showing the calculation procedure; really good 👍
Lmao, I finished summer school for Algebra 1 where we just learned this, and to calculate it we used a calculator. This is an amazing video, the mathematics were correct, and your explanation is amazing!
This was really helpful
It would have been nice to compare it to the analytical solution of least squares regression being (Xᵀ•X)⁻¹•(Xᵀ•Y) just to show they're identical
quicker and theoretically more correct for solving this problem!
@@andreagalloni92 could you make a video and share it with everyone please? (theory and python code)
@@JohnMacPherson-hr4yz mmm... Maybe I can :)
I'll try to do it
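(A sketch of that analytical solution with NumPy, where xs and ys are hypothetical stand-ins for the data; the column of ones supplies the intercept, and pinv is the numerically safer way to apply (XᵀX)⁻¹Xᵀ:)

import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical feature values
ys = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # hypothetical targets

X = np.column_stack([np.ones_like(xs), xs])  # design matrix [1, x]
b, m = np.linalg.pinv(X) @ ys  # least-squares solution via the pseudo-inverse
print(m, b)  # should match the slope/intercept gradient descent converges to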
At 10:00 we need to take the derivative with respect to each weight (M) like this: first for M1, then for M2, then M3, ... up to Mn.
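(Written out: with predictions ŷᵢ = b + m₁x_{i,1} + … + m_k x_{i,k}, each weight gets its own partial derivative,

$$\frac{\partial E}{\partial m_k} = -\frac{2}{n}\sum_{i=1}^{n} x_{i,k}\,\bigl(y_i - \hat{y}_i\bigr),$$

and all of them are updated together in one gradient step.)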
That's a pretty good lecture.. 😍
Amazing video!!!!
Hi sir, this really helps me understand linear regression and build it myself from scratch. Thanks 👍 Can you describe all the models from scratch like this?
this is the real algorithm. great :)
If you're counting from 0 to n, it means you have (n+1) data points, so should the dividing factor not be 1/(n+1) while calculating mean of squared errors at 4:43?
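(The commenter's point is fair: summing i = 0 … n really is n + 1 terms. The usual convention that makes the 1/n factor correct is to index i = 1 … n:

$$E = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - (m x_i + b)\bigr)^2$$)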
Thanks man
Dude did my 6-hour lecture + lab class all in just 24 minutes, bruh. Ez Clap hahah, thanks a lot.
Why does 1/n become -2/n? As I read on Towards Data Science, it would be 1/n, with the -2 coming after the sum symbol as 2xᵢ; so why, or how, does 1/n become -2/n? Also, great video btw, helped me unlock a few skills in the ML skill tree :)
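(The -2 comes from the chain rule: differentiating each squared term pulls down a 2 and an inner derivative of -xᵢ, and both constants are then factored out in front of the sum next to the 1/n:

$$\frac{\partial E}{\partial m} = \frac{1}{n}\sum_{i=1}^{n} 2\bigl(y_i - (m x_i + b)\bigr)\cdot(-x_i) = -\frac{2}{n}\sum_{i=1}^{n} x_i\bigl(y_i - (m x_i + b)\bigr)$$)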
Hi, I really like your videos. One question, what is h?
Thank you!
I love this guy
Great explanation
Upload more model implementation plz
Where can we get the csv file he worked on?
Thank you very much
perfect job
Please do more such videos
wow, that is exactly what is called simplicity
Bro, how is the model created from these mathematical outputs, and how does the predict function work?
well done
amazing content
Which software do you use for the video?
I implemented it myself and it came out to be more accurate than scikit-learn.
import numpy as np
from sympy import Symbol, solve, diff

class LinearRegression:
    # Exact least squares via symbolic differentiation (no gradient descent)
    def __init__(self):
        self.w = {}  # weights: w0 is the intercept, w1..wn the feature coefficients

    def fit(self, x, y):
        # one symbol per weight: intercept plus one per feature column
        for i in range(x.shape[1] + 1):
            self.w[f"w{i}"] = Symbol(f"w{i}")
        # build the squared-error expression over all samples
        e = 0
        for i in range(len(x)):
            yp = 0
            for j in range(len(self.w)):
                if j == 0:
                    yp += self.w[f"w{j}"]                # intercept term
                else:
                    yp += self.w[f"w{j}"] * x[i][j - 1]  # feature term
            e += (yp - y[i]) ** 2
        # set every partial derivative to zero and solve the linear system
        eq = [diff(e, self.w[f"w{i}"]) for i in range(len(self.w))]
        w = solve(eq, list(self.w.values()))
        for i in range(len(self.w)):
            self.w[f"w{i}"] = w[Symbol(f"w{i}")]

    def predict(self, x):
        def prediction(features):
            yp = 0
            for i in range(len(self.w)):
                if i == 0:
                    yp += self.w[f"w{i}"]
                else:
                    yp += self.w[f"w{i}"] * features[i - 1]
            return yp
        return list(map(prediction, x))
i love this video
You're great bro
@NeuralNine How do I test the trained model to get a prediction based on a newly given value/independent variable (a value that is foreign to the training set)?
Great video btw!
Yes! This is a good question I hope he answers. If he already has can someone link me please? :)
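(Once training has finished, predicting for unseen input is just evaluating the fitted line; a minimal sketch, using the m and b produced by the video's training loop:)

def predict(x, m, b):
    # the fitted line works for any x, including values outside the training set
    return m * x + b

print(predict(55, m, b))  # hypothetical new input value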
why do we need to print epoch values??
Please provide the dataset which you use, because it will help us follow along with you.
I am getting an error:
AttributeError Traceback (most recent call last)
Cell In[10], line 39
37 if i % 50 == 0:
38 print(f"Epoch: {i}")
---> 39 m, b = gradient_descent(m, b, data, L)
41 print(m, b)
43 plt.scatter(data.studytime, data.score, color = "black")
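(For anyone hitting errors here, a self-contained sketch of the kind of loop the traceback points at; the studytime/score column names come from the trace, everything else is an assumption, so check that your CSV actually has those columns if you get an AttributeError:)

import pandas as pd

def gradient_descent(m_now, b_now, points, L):
    # one step downhill on the mean squared error
    m_grad, b_grad = 0.0, 0.0
    n = len(points)
    for p in points.itertuples():
        m_grad += -(2 / n) * p.studytime * (p.score - (m_now * p.studytime + b_now))
        b_grad += -(2 / n) * (p.score - (m_now * p.studytime + b_now))
    return m_now - L * m_grad, b_now - L * b_grad

data = pd.read_csv("data.csv")  # hypothetical file with studytime and score columns
m, b, L = 0.0, 0.0, 0.0001
for i in range(1000):
    if i % 50 == 0:
        print(f"Epoch: {i}")
    m, b = gradient_descent(m, b, data, L)
print(m, b)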
LSTM, RNN, logistic regression and more; we are expecting more from you.
Amazing explaining, thanks a lot!
Can you upload the csv file and send me the link, please ?
I don't understand the reasoning behind [m * x * b for x in range(20, 80)]. Why is this the y axis? What does it represent? Why that equation? Also, why aren't we just reading the found values of x and y? I have understood all the calculations being done, but I don't understand the visualization/graph representation part. Please let me know if you see this.
He did it all wrong just ignore it imo
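(For anyone else stuck here: the comprehension evaluates the fitted line over a range of x values so it can be drawn on top of the scatter plot; the * between x and b is presumably a typo for +. A sketch, using the m, b, and data from the training loop:)

import matplotlib.pyplot as plt

xs = list(range(20, 80))      # x values spanning the data range
ys = [m * x + b for x in xs]  # the fitted line y = m*x + b gives the y values
plt.scatter(data.studytime, data.score, color="black")  # the raw points
plt.plot(xs, ys, color="red")                           # the regression line
plt.show()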
Woooow that's great
Nice video! Could you please provide the dataset that you used?
yup
can you do the same for Support vector machine
Can anyone tell me a practical use case or an example where we can implement this? It would be great if anyone could give me a full scenario.
LSTM from scratch! 🙏😍
why did we do a partial derivative?
Thanks
Hello, do you have a video or notes on plotting the linear regression when there is more than one attribute in the input data points? Say, 10 columns of x and 1 column of y??
Please respond.
Thank you.
I too need an explanation of this. It would be really helpful.
@@sairekhaunnam3001 Hey, if there is one attribute we can plot it in 2D; if two attributes, then in 3D; but for three attributes we would need 4D, which is not possible visually. That's why we restrict ourselves.
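(A common workaround when x has many columns: instead of trying to draw the fitted hyperplane, plot predicted vs. actual y. A sketch with hypothetical data, where X, w, b, and y stand in for your features, learned weights, intercept, and targets:)

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))  # hypothetical 10-feature inputs
w = rng.normal(size=10)         # hypothetical learned weights
b = 0.5                         # hypothetical learned intercept
y = X @ w + b + rng.normal(scale=0.3, size=100)  # targets with noise

y_pred = X @ w + b                                # model predictions
plt.scatter(y, y_pred)                            # each dot is one sample
plt.plot([y.min(), y.max()], [y.min(), y.max()])  # the ideal y_pred = y line
plt.xlabel("actual y")
plt.ylabel("predicted y")
plt.show()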
Nice video as usual :). But I don't know calculus 😥
here to learn something that I already know but the video is too entertaining to click off
Good job man, can you put the code in PDF format? Again, awesome video.
A savior
Why did you decrease the number of epochs? And can't the minimum be found by setting the partial derivatives = 0?
Setting one partial derivative to zero won't necessarily give the minimum of the loss function, since the loss also depends on the other variables/features; you have to solve all the partials simultaneously.
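(Concretely: setting both partials to zero and solving them simultaneously does work here, and gives the closed-form answer for the single-feature case,

$$\frac{\partial E}{\partial m} = 0,\ \frac{\partial E}{\partial b} = 0 \;\Rightarrow\; m = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad b = \bar{y} - m\,\bar{x},$$

which is exactly the analytical solution mentioned earlier in the thread.)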
Why the gradient descent method? The pseudo-inverse solves this immediately.
Make a video about it and link me, plz
Do Multiple Linear Regression!!!!!
You sound like Tech With Tim 😂😂😂
where is the csv file?
What's the point of printing epochs if we see nothing on the screen in that regard, lol? And why is it "highly mathematical" if we don't even derive anything? I would redo the video with loss_function included in the print; otherwise it just hangs in the code for God knows what reason.
Thx, nice.
Pls pls make one for neural networks
Yeap yeap, and please explain more details in it.
You should not just "think" that it is the best line, you should verify it!