Proof (part 3) minimizing squared error to regression line | Khan Academy

  • Published Dec 21, 2024

COMMENTS • 25

  • @孟令軻
    @孟令軻 9 years ago +4

    I really love this series of statistics sagas. Whenever I want to start a data analysis in Excel or some other tool, this basic knowledge helps me better understand what I am doing.

  • @oling2812
    @oling2812 11 years ago +8

    To everyone who's wondering where part 2 is, here's the link (can't post the full link, so here's the URL after the .com): /watch?v=f6OnoxctvUk

    • @SimplyAndy
      @SimplyAndy 3 years ago

      This comment needs more votes. Relevant even after 8 years.

  • @aquilazyy1125
    @aquilazyy1125 4 years ago

    This is given in one of the homework problems I'm currently working on, a formula for computing m and b: let the vectors x = [x1, ..., xn]^T and y = [y1, ..., yn]^T, and the matrix A = [x, [1, ..., 1]^T]; then [m, b]^T = (A^T A)^{-1} A^T y.
    If you move the matrix inverse from the right-hand side back to the left and then expand out the matrix multiplication, it actually gives you the system of equations at the end of the video. I couldn't understand why at first, but your video explains it much better.
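
    A quick way to check the equivalence described above (a minimal NumPy sketch on made-up data; the numbers are arbitrary): the normal equations (A^T A)[m, b]^T = A^T y and the two-equation system from the end of the video give the same slope and intercept, because A^T A is just [[sum(x^2), sum(x)], [sum(x), n]], and dividing each row by n turns the sums into the means used in the video.

        import numpy as np

        # Made-up sample data, only for illustration.
        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
        n = len(x)

        # Normal-equations form from the comment: A = [x, 1], solve (A^T A) [m, b]^T = A^T y.
        A = np.column_stack([x, np.ones(n)])
        m_b = np.linalg.solve(A.T @ A, A.T @ y)

        # The system from the end of the video, written with means:
        #   m * mean(x^2) + b * mean(x) = mean(x*y)
        #   m * mean(x)   + b           = mean(y)
        M = np.array([[np.mean(x**2), np.mean(x)],
                      [np.mean(x),    1.0]])
        rhs = np.array([np.mean(x * y), np.mean(y)])
        m_b_video = np.linalg.solve(M, rhs)

        print(m_b)        # slope and intercept from the normal equations
        print(m_b_video)  # the same values from the video's system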

  • @charliean9237
    @charliean9237 6 years ago +1

    At 10:31 the "two" points are actually the same point: (x_bar^2 / x_bar, (x_bar * y_bar) / x_bar) is exactly (x_bar, y_bar).

    • @muhammadaliburhanuddin9484
      @muhammadaliburhanuddin9484 5 years ago

      No, they're not the same point.

    • @aquilazyy1125
      @aquilazyy1125 4 years ago

      It's x^2_bar (the mean of x^2), not x_bar^2. Remember that these are averages over all n terms, so you can't just cancel them out.
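
      For anyone following this thread, a tiny numeric check (with made-up data) shows that the mean of x^2 and the square of the mean of x generally differ, so the two points around 10:31 are distinct, even though both lie on the best-fit line:

        import numpy as np

        # Arbitrary sample data, just for illustration.
        x = np.array([1.0, 2.0, 3.0, 6.0])
        y = np.array([1.0, 3.0, 2.0, 7.0])

        mean_x, mean_y = x.mean(), y.mean()
        mean_x2 = (x ** 2).mean()   # "x^2 bar": the mean of the squared x's
        mean_xy = (x * y).mean()    # "xy bar": the mean of the products

        print(mean_x2, mean_x ** 2)                   # differ in general
        print((mean_x, mean_y))                       # one point on the fitted line
        print((mean_x2 / mean_x, mean_xy / mean_x))   # the other point

        # Optional: fit m, b and check that both points satisfy y = m*x + b.
        m, b = np.polyfit(x, y, 1)
        print(m * mean_x + b, mean_y)
        print(m * (mean_x2 / mean_x) + b, mean_xy / mean_x)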

  • @ananthakrishnank3208
    @ananthakrishnank3208 1 year ago

    Elegant proof! Many thanks.

  • @NorwegianExplorer
    @NorwegianExplorer 9 years ago +12

    This playlist still needs some tidying up. Just move Proof part 2 between parts 1 and 3, please.

  • @gupta942
    @gupta942 4 years ago

    Really Love ❤️ your explanation...!!!
    Thank you ...!!!

  • @andregomes2629
    @andregomes2629 5 years ago

    This is awesome, thanks Khan!

  • @DevShah-z2d
    @DevShah-z2d 10 months ago

    thanks

  • @80amnesia
    @80amnesia 3 years ago

    thanks! You're always the best

  • @anweshadutta8782
    @anweshadutta8782 5 years ago

    Thank you so much.

  • @Mr.W.M.T.Madushanka
    @Mr.W.M.T.Madushanka 1 month ago

    Please add "Proof (part 2) minimizing squared error to regression line | Khan Academy" to the list

  • @ad2181
    @ad2181 14 years ago

    Excellent insight.

  • @aditijuneja1848
    @aditijuneja1848 2 years ago

    The condition that the partial derivatives are 0 will be true for maxima too. So how will we know that our point is a minimum and not a maximum?
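
    One way to see this (just a sketch, not covered in the video): SSE(m, b) = sum_i (y_i - (m*x_i + b))^2 is a convex quadratic in m and b, because its Hessian, 2 * [[sum(x_i^2), sum(x_i)], [sum(x_i), n]], is positive semidefinite (and positive definite unless all x_i are equal), so the single critical point found by setting the partial derivatives to 0 must be the minimum. A quick numeric check with made-up x values:

        import numpy as np

        # Arbitrary x values, just to evaluate the second-order condition.
        x = np.array([1.0, 2.0, 3.0, 4.0])
        n = len(x)

        # Hessian of SSE(m, b) with respect to (m, b):
        #   d2/dm2 = 2*sum(x^2), d2/dmdb = 2*sum(x), d2/db2 = 2*n
        H = 2 * np.array([[np.sum(x ** 2), np.sum(x)],
                          [np.sum(x),      n]])

        # Both eigenvalues are positive, so the error surface curves upward
        # in every direction and the critical point is a minimum.
        print(np.linalg.eigvalsh(H))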

  • @InternetDarkLord
    @InternetDarkLord 4 years ago

    This playlist is out of order. Part 1 is #52, part 2 is #56, and part 3 is #53.

  • @clar331
    @clar331 5 years ago +1

    This goes too fast for me, I'm going to rewatch x(

  • @pradoshkumarsahu2706
    @pradoshkumarsahu2706 6 years ago +3

    Watch Part 2 here before you follow part 3:
    ua-cam.com/video/f6OnoxctvUk/v-deo.html

  • @ChimaevMikelson
    @ChimaevMikelson 7 months ago

    💯

  • @bartcase
    @bartcase 13 years ago

    you're awesome

  • @sagar7958
    @sagar7958 7 years ago +2

    How did he decide that the figure would be a 3D figure?

    • @mrroo9658
      @mrroo9658 7 years ago +11

      Because there are two variables (m, b). When y is a function of only one variable, e.g. y = f(x), it's a 2D graph, and when y is a function of two variables, e.g. y = f(x, z), it's a 3D graph.
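
      To visualize what the reply above describes, here is a small matplotlib sketch (made-up data points and arbitrary parameter ranges) that plots the squared error as a function of the two variables m and b; the bowl-shaped surface is the 3D figure drawn in the video:

        import numpy as np
        import matplotlib.pyplot as plt

        # Made-up data; only the shape of the surface matters here.
        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([2.0, 4.1, 5.9, 8.2, 9.9])

        # Grid of candidate slopes and intercepts.
        m_grid, b_grid = np.meshgrid(np.linspace(0, 4, 100), np.linspace(-3, 3, 100))

        # SSE(m, b) = sum over the data of (y_i - (m*x_i + b))^2, evaluated on the grid.
        sse = sum((yi - (m_grid * xi + b_grid)) ** 2 for xi, yi in zip(x, y))

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        ax.plot_surface(m_grid, b_grid, sse, cmap="viridis")
        ax.set_xlabel("m")
        ax.set_ylabel("b")
        ax.set_zlabel("SSE")
        plt.show()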