Linear Regression From Scratch in Python (Mathematical)

  • Published Jun 15, 2024
  • In this video we implement the linear regression algorithm from scratch. This episode is highly mathematical.
    ◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
    📚 Programming Books & Merch 📚
    🐍 The Python Bible Book: www.neuralnine.com/books/
    💻 The Algorithm Bible Book: www.neuralnine.com/books/
    👕 Programming Merch: www.neuralnine.com/shop
    🌐 Social Media & Contact 🌐
    📱 Website: www.neuralnine.com/
    📷 Instagram: / neuralnine
    🐦 Twitter: / neuralnine
    🤵 LinkedIn: / neuralnine
    📁 GitHub: github.com/NeuralNine
    🎙 Discord: / discord
    🎵 Outro Music From: www.bensound.com/
    Timestamps:
    (0:00) Intro
    (0:19) Mathematical Theory
    (12:48) Implementation From Scratch
    (24:05) Outro
  • Science & Technology

COMMENTS • 131

  • @MrTheBroMoe 2 years ago +192

    Please do more mathematical implementations like this! This really helped me understand the math behind these algorithms.

  • @PML37 5 days ago +1

    That was just wow! The way you explained it was amazing.
    Thank you!

  • @kapilsonyt 2 years ago +3

    I recently stumbled upon this while looking for a similar approach in Python, and I've now subscribed.
    Thank you very much for imparting knowledge.

  • @Nasrullah0 7 months ago +3

    This video is amazing, good job. I'm now actually thinking about revisiting the math classes I couldn't take before in order to get better at these machine learning algorithms.

  • @Ryguy12543 1 year ago +1

    Thank you so much. I'm working through a problem set for a Neural Analysis class and this really helps. Great video.

  • @Ossigen8 2 months ago +2

    I don't often comment on videos, but this one is really good: you explained the math concepts really well by doing the examples on the graph etc. Good stuff on this channel!

  • @vardhanr8177 2 years ago +10

    AMAZING video!!! I love these videos which teach theory too! Thank youuu!!!

  • @santrix1595 1 year ago +3

    Dude, slammer video!! Love the fact that you made the math interesting and super easy to understand.

  • @graphicsguy1965 1 year ago +3

    Thanks man! Awesome video that dumbed it down enough for me. Could you do linear regression for fitting a sphere to points, given the sphere radius (or not), as is used in terrestrial laser scanning? If you've never messed with it: you place spheres around what you are scanning, then the software uses the spheres to align all of the different scans at different locations together.

  • @SabarishanM 14 days ago

    I was searching everywhere and finally found what I needed. Your video really clears up the fundamentals of creating a linear regression model. Thank you.

  • @kenadams2261 2 years ago +1

    Amazing video, looking forward to more implementations from scratch!

  • @AkhilendraOjhaBME 1 year ago +2

    Clearly explained in simple words; looking forward to learning more methods from you.

  • @TheVerbalAxiom 2 years ago +1

    I am subscribing, definitely. You have taught, in the most straightforward and explanatory way, a concept that other videos made a bit too complicated. Thank you!

  • @zukofire6424 2 years ago +7

    Thanks very much for this!! I am a data analysis student, close to giving up but still hanging on!

  • @davidmedina1886 2 years ago +2

    This is a great video! Thank you so much!

  • @happypotato4252 5 months ago

    Excellent explanation of the implementation of linear regression, thanks!!

  • @fraggodgaming4433 1 year ago

    Thank you so much man! Great vid... Keep up the good work!

  • @zezestyles6215 1 year ago

    Thank youuu :) Loved the way you explained it.

  • @lashrack4548 1 year ago

    You got a new subscriber... Best in-detail explanation ever!!!

  • @naitikpatil243 2 years ago +2

    Waiting for series of such videos 🤩

  • @madi6736 3 months ago

    Your explanations are really clear and precise, thank you very much ♥♥♥

  • @cambridgebreaths3581 2 years ago +49

    I do like all your videos that contain the keywords "mathematical theory" and "from scratch" :). Please do more similar videos. Thank you kindly

    • @xKreesherZ 2 years ago +3

      same, stuff from scratch is the best stuff for learning

    • @fater8711 2 years ago +3

      @xKreesherZ Yeah, and it's really fun.
      I am always so proud when I am able to code something from scratch.

  • @anonymousvevo8697 1 year ago

    The math part was awesome! Thanks a lot, very clear and simple.

  • @throgmortonartstudio2402 2 years ago

    That was so much easier to understand. Thank you.

  • @unknowncorsairtim 10 months ago +3

    Awesome tutorial! Could you please explain why you use gradient descent to minimize squared error instead of using the formula: divide the standard deviation of y values by the standard deviation of x values and then multiply this by the correlation between x and y?
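
  • Note: the closed-form estimate described in the comment above can be sketched in a few lines of NumPy and compared with the video's gradient-descent result (x and y are assumed to be 1-D arrays holding the input and target columns):

        import numpy as np

        # slope = corr(x, y) * std(y) / std(x); intercept from the means
        r = np.corrcoef(x, y)[0, 1]
        m = r * np.std(y) / np.std(x)
        b = np.mean(y) - m * np.mean(x)
        # gradient descent on the mean squared error converges to the same m and b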

  • @sryps 2 years ago +2

    love the videos, finally

  • @archer9056 2 months ago

    Amazingly simple explanation of linear regression. Please also cover the other techniques in the same way.

  • @omeshnath6826 8 months ago

    Got a good idea about linear regression, thanks bud!!

  • @amirnaeej7986 6 months ago

    You're a lifesaver, thank you soooo much ❤❤

  • @lamiyanabaova3020 1 year ago

    Thanks a lot for the great explanation!

  • @milorday5141 1 year ago

    Thank you, very useful and clear.

  • @mysticrustacean4065 10 months ago

    Brother! I simply love you after I came across this video!

  • @drewprof 2 months ago

    Nicely done!

  • @cryptoemperor3399 2 years ago

    Thanks a bunch. My professor taught this in class and I was like a deer in headlights, but after your video I have an understanding.

  • @vallabhshelar3176 2 years ago

    Loved it bro, you are just amazing.

  • @kuroisan2698 1 year ago

    Great video, NeuralNine!
    I feel this is gradient descent, isn't it?
    If it is, is there an implementation for least squares? Because this feels a bit random: you take some values and choose the smallest from those random values, not necessarily the smallest value you can get.

  • @yxyyy 2 years ago +4

    Next for logistic regression please!

  • @user-tr5vv6gf6l 9 months ago +1

    This is amazing!
    How can I store the current m and b values for every iteration?
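
  • Note: one minimal way to do this, sketched against the training loop shown in the video (the names m, b, data, L and gradient_descent are assumed from the video's code):

        history = []
        for i in range(epochs):
            m, b = gradient_descent(m, b, data, L)
            history.append((m, b))  # snapshot of the parameters at each iteration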

  • @actualBIAS 4 months ago

    Brother, thank you. It just wouldn't go into my head; thanks to you, I've understood it.

  • @chadstrachan9696 5 months ago

    Great video, very informative

  • @sajid_ahamed 2 years ago

    Awesome video, and dark-mode Paint rocksss!

  • @donowen9848 6 months ago

    Awesome representation!!!

  • @bluestar2253 2 years ago +13

    Yes, please do more videos like this one. Even Einstein gives it 2 thumbs up! :)

  • @bharthyadav6794 1 year ago +1

    Yo, out of all the explanations I saw, you did the best work. This is what I wanted. Best work man, subbed.

  • @allenallen5136 1 year ago +1

    Combines theory with practice and shows the calculation procedure, really good 👍

  • @minuet6919 2 years ago

    Lmao, I finished summer school for Algebra 1, where we just learned this and used a calculator to compute it. This is an amazing video: the mathematics was correct and your explanation is amazing!

  • @adejumoadeniyi5152 2 years ago

    This was really helpful

  • @atrumluminarium 2 years ago +7

    It would have been nice to compare it to the analytical solution of least squares regression being (Xᵀ•X)⁻¹•(Xᵀ•Y) just to show they're identical

    • @andreagalloni92 1 year ago

      Quicker and theoretically more correct for solving this problem!

    • @JohnMacPherson-hr4yz 1 year ago +1

      @andreagalloni92 Could you make a video and share it with everyone, please? (theory and Python code)

    • @andreagalloni92 1 year ago +1

      @JohnMacPherson-hr4yz Mmm... maybe I can :)
      I'll try to do it.
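
    • Note: a minimal NumPy sketch of the analytical solution mentioned at the top of this thread (x and y are assumed to be 1-D arrays of inputs and targets, as in the video's data):

          import numpy as np

          X = np.column_stack([np.ones_like(x), x])  # design matrix with a bias column
          w = np.linalg.inv(X.T @ X) @ (X.T @ y)     # (X^T X)^(-1) (X^T y)
          b, m = w                                   # same line gradient descent converges to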

  • @curtezyt1984 6 months ago +1

    At 10:00 we need to take the derivative with respect to each weight (m) like this: first for m1, then for m2, then m3, ... up to mn.
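
  • Note: written out, the comment above amounts to one partial derivative per weight. For the mean squared error

        E = \frac{1}{n}\sum_{i=1}^{n}\Bigl(y_i - \bigl(\textstyle\sum_{k=1}^{p} m_k x_{ik} + b\bigr)\Bigr)^2

    each weight m_k gets its own gradient component

        \frac{\partial E}{\partial m_k} = -\frac{2}{n}\sum_{i=1}^{n} x_{ik}\,\bigl(y_i - \hat{y}_i\bigr), \qquad k = 1, \dots, p.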

  • @shubhamchoudhary5461 2 years ago

    That's a pretty good lecture.. 😍

  • @sriram2k4 1 year ago

    Amazing video!!!!

  • @debashishsarkar649 1 year ago +2

    Hi sir, it really helped me understand linear regression and build it myself from scratch. Thanks 👍 Can you describe all the models from scratch, like this one?

  • @nilothpalbhattacharya8230 1 year ago

    This is the real algorithm. Great :)

  • @baivabdattamajumder6568 1 year ago

    If you're counting from 0 to n, it means you have (n+1) data points, so should the dividing factor not be 1/(n+1) while calculating mean of squared errors at 4:43?

  • @leul1407 8 months ago

    Thanks man

  • @ahmedifhaam7266 2 years ago

    Dude did my 6-hour lecture + lab class all in just 24 minutes, bruh. EZ Clap hahah, thanks a lot.

  • @thomashoughton228 2 years ago +1

    Why does 1/n become -2/n? As I read on Towards Data Science, it would be 1/n, and the -2 would appear after the sum symbol as -2x_i. So why, or how, does 1/n become -2/n? Also, great video btw; it helped me unlock a few skills in the ML skill tree :)
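
  • Note: the -2/n is just the chain rule applied to the squared term; writing the step out:

        \frac{\partial}{\partial m}\,\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - (m x_i + b)\bigr)^2
        = \frac{1}{n}\sum_{i=1}^{n} 2\,\bigl(y_i - (m x_i + b)\bigr)\cdot(-x_i)
        = -\frac{2}{n}\sum_{i=1}^{n} x_i\,\bigl(y_i - (m x_i + b)\bigr)

    The 2 (from the square) and the minus sign (from differentiating -m x_i inside) are constants, so they can be pulled out in front of the sum next to the 1/n; both placements are the same expression.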

  • @lucasgonzalezsonnenberg3204 11 months ago

    Hi, I really like your videos. One question, what is h?
    Thank you!

  • @gamingwithdingo 4 days ago +1

    I love this guy

  • @kaleemahmed3300 2 years ago

    Great explanation.
    Please upload more model implementations.

  • @farukesatergin8129 1 year ago +1

    Where can we get the CSV file he worked on?

  • @davidisaacgalang3000 1 year ago

    Thank you very much

  • @hayki_ds 1 year ago

    Perfect job.

  • @nikhdzia 2 years ago

    Please do more such videos

  • @piyush9555 1 year ago

    Wow, that is exactly what is called simplicity.

  • @fishfanaticgokul4758 1 year ago

    Bro, how is the model created from these mathematical outputs, and how does the predict function work?

  • @mohammadf9646 3 months ago +1

    Well done!

  • @tusharvashistha190 1 year ago

    Amazing content!

  • @munazirhussain5450 4 months ago

    Which software do you use for the video?

  • @deveshjain9543 11 months ago

    I implemented it myself and it came out to be more accurate than scikit-learn:

        import numpy as np
        from sympy import Symbol, solve, diff

        class LinearRegression:
            def __init__(self):
                self.w = {}  # maps "w0", "w1", ... to symbols, later to fitted values

            def fit(self, x, y):
                # one symbolic weight per feature, plus w0 as the intercept
                for i in np.arange(x.shape[1] + 1):
                    self.w[f"w{i}"] = Symbol(f"w{i}")
                # build the sum of squared errors symbolically
                e = 0
                for i in range(len(x)):
                    yp = 0
                    for j in np.arange(len(self.w)):
                        if j == 0:
                            yp += self.w[f"w{j}"]
                        else:
                            yp += self.w[f"w{j}"] * x[i][j - 1]
                    e += (yp - y[i]) ** 2
                # set every partial derivative to zero and solve exactly
                eq = []
                for i in np.arange(len(self.w)):
                    eq.append(diff(e, self.w[f"w{i}"]))
                w = solve(eq, list(self.w.keys()))
                for i in np.arange(len(self.w)):
                    self.w[f"w{i}"] = w[self.w[f"w{i}"]]

            def predict(self, x):
                def prediction(features):
                    yp = 0
                    for i in np.arange(len(self.w)):
                        if i == 0:
                            yp += self.w[f"w{i}"]
                        else:
                            yp += self.w[f"w{i}"] * features[i - 1]
                    return yp
                return list(map(prediction, x))
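
  • Note: a hypothetical usage of the class above (data invented for illustration):

        X = np.array([[1.0], [2.0], [3.0]])
        y = np.array([2.0, 4.0, 6.0])
        reg = LinearRegression()
        reg.fit(X, y)                 # solves the zero-gradient equations exactly
        print(reg.predict([[4.0]]))   # -> [8.0...] for this perfectly linear data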

  • @duyanh4186 1 year ago

    I love this video.

  • @alihekmat2517 2 years ago

    You're great bro

  • @MultiCodFTW 1 year ago +1

    @NeuralNine How do I test the trained models to give a prediction based on a newly given value/independent variable (a value that is foreign to the training set)?
    Great video btw!

    • @adrianbitsinnie1537 1 year ago

      Yes! This is a good question that I hope he answers. If he already has, can someone link me please? :)
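
    • Note: with the m and b left over after the video's training loop, predicting for an unseen input is just evaluating the fitted line; a minimal sketch:

          def predict(x, m, b):
              return m * x + b  # the learned line applied to a new x

          print(predict(48, m, b))  # e.g. a predicted score for a new study-time value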

  • @ghost_riderrr 2 months ago

    Why do we need to print the epoch values??

  • @anidea8012 1 year ago

    Please provide the dataset you use, because it will help us follow along with you.

  • @shashwatbalodhi4042 11 months ago

    I am getting this error:

        AttributeError                            Traceback (most recent call last)
        Cell In[10], line 39
             37 if i % 50 == 0:
             38     print(f"Epoch: {i}")
        ---> 39 m, b = gradient_descent(m, b, data, L)
             41 print(m, b)
             43 plt.scatter(data.studytime, data.score, color="black")

  • @rockymani94 2 years ago

    LSTM, RNN, logistic regression, and more; we are expecting more from you.

  • @eslamashraf5847 1 year ago +1

    Amazing explanation, thanks a lot!
    Can you upload the CSV file and send me the link, please?

  • @roomian 1 year ago

    I don't understand the reasoning behind [m * x * b for x in range(20, 80)]. Why is this the y-axis? What does it represent? Why that equation? Also, why aren't we just reading the found values of x and y? I have understood all the calculations being done, but I don't understand the visualization/graph-representation part. Please let me know if you see this.

    • @OfficialMazLi 8 months ago

      He did it all wrong, just ignore it imo.
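
    • Note: the comprehension quoted above is presumably meant to be m * x + b (with a plus); it computes the fitted line's y-value at each x in the plotting range, so the line can be drawn over the scatter plot. A sketch of the idea:

          import matplotlib.pyplot as plt

          xs = range(20, 80)                     # x-range to draw the line over
          plt.plot(xs, [m * x + b for x in xs])  # y-values of the fitted line
          plt.scatter(data.studytime, data.score, color="black")
          plt.show()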

  • @chrisdalain6979 1 year ago

    Woooow that's great

  • @dronedangwal447 10 months ago +1

    Nice video! Could you please provide the dataset that you used?

  • @mehdibouhamidi4675 2 years ago

    Can you do the same for support vector machines?

  • @muhammadhammadsarwar698 1 month ago

    Can anyone tell me a practical use case or an example where we can implement this? It would be great if anyone could give me a full scenario.

  • @aandresriera7927 2 years ago

    LSTM from scratch! 🙏😍

  • @phoneix24886 10 months ago

    Why did we take a partial derivative?

  • @deepakdubey3973 1 year ago

    Thanks

  • @sneha.tiwari 1 year ago

    Hello, do you have a video or notes on plotting the linear regression when there is more than one attribute in the input data points? Say 10 columns of x and 1 column of y?
    Please respond.
    Thank you.

    • @sairekhaunnam3001 1 year ago

      I too need an explanation of this. It would be really helpful.

    • @sneha.tiwari 1 year ago

      @sairekhaunnam3001 Hey, if there is one attribute we can plot it in 2D; if two attributes, then in 3D; and for three attributes we would need to plot in 4D, which is not possible visually. That's why we restrict ourselves.
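
    • Note: one common workaround (not covered in the video) is to skip plotting against the inputs and instead plot predicted versus actual targets, which works for any number of x columns; y_true and y_pred below are assumed 1-D arrays:

          import matplotlib.pyplot as plt

          plt.scatter(y_true, y_pred)      # one point per data row
          lo, hi = min(y_true), max(y_true)
          plt.plot([lo, hi], [lo, hi])     # diagonal marks perfect predictions
          plt.xlabel("actual y")
          plt.ylabel("predicted y")
          plt.show()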

  • @zombiekiller7101 2 years ago +2

    Nice video as usual :). But I don't know calculus 😥

  • @Graverman 2 years ago +1

    Here to learn something that I already know, but the video is too entertaining to click off.

  • @joshuarobinson2338 2 years ago

    Good job man. Can you put the code in PDF format? Again, awesome video.

  • @julianavarela4936 8 months ago

    A savior

  • @shaileshchauhan7855 2 years ago

    Why did you decrease the number of epochs? And can't the minimum be found by setting the partial derivatives to 0?

    • @CptAJbanned 1 year ago

      The minimum of one of the partial derivatives won't necessarily be the minimum of the loss function, since it also depends on the other variables/features.

  • @quaka96 2 years ago +1

    Why the gradient descent method? The pseudoinverse solves this immediately.

    • @CptAJbanned 1 year ago

      Make a video about it and link me, plz
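
    • Note: the pseudoinverse route fits in a few lines of NumPy (x and y assumed to be 1-D arrays as in the video's data):

          import numpy as np

          X = np.column_stack([np.ones_like(x), x])  # bias column, then the feature
          b, m = np.linalg.pinv(X) @ y               # least-squares fit, no iteration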

  • @user-cw8cw3fq7k 2 months ago

    Do Multiple Linear Regression!!!!!

  • @navardowilliams7484 1 year ago

    You sound like Tech With Tim 😂😂😂

  • @keyurpancholi4277 1 year ago

    Where is the CSV file?

  • @annawilson3824 1 year ago

    What's the point of printing epochs if we see nothing on the screen in that regard, lol? And why is this "highly mathematical" if we don't even derive anything? I would redo the video with loss_function included in the print; otherwise it just hangs in the code for God knows what reason.

  • @philtoa334 1 year ago

    Thx, nice.

  • @simssim262 2 years ago +1

    Pls pls make one for neural networks

  • @davidlanda2324 8 months ago

    You should not just "think" that it is the best line; you should verify it!