Stanford CS229: Machine Learning - Linear Regression and Gradient Descent | Lecture 2 (Autumn 2018)

  • Published 4 Sep 2024

COMMENTS • 268

  • @krishyket
    @krishyket Рік тому +331

    Dude is a multi-millionaire and took valuable time meticulously teaching students and us. Legend.

    • @The_Quaalude
      @The_Quaalude 7 місяців тому +25

      Bro needs to train his future employees

    • @vikram-aditya
      @vikram-aditya 7 місяців тому +5

      yes bro. i think the more people with the knowledge, the faster the breakthroughs in the field

    • @clerpington_the_fifth
      @clerpington_the_fifth 7 місяців тому +3

      ...and FOR FREE.

    • @user-ez7jl6ts8x
      @user-ez7jl6ts8x Місяць тому +9

      To people like him, money is really irrelevant. These people are in the top 0.00001% of the world; all that matters to them is how they can contribute to their respective field and help make this world a better place. Money is just a by-product of that passion.

  • @Eric-zo8wo
    @Eric-zo8wo Рік тому +210

    0:41: 📚 This class will cover linear regression, batch and stochastic gradient descent, and the normal equations as algorithms for fitting linear regression models.
    5:35: 🏠 The speaker discusses using multiple input features, such as size and number of bedrooms, to estimate the price of a house.
    12:03: 📝 The hypothesis is defined as the sum of features multiplied by parameters.
    18:40: 📉 Gradient descent is a method to minimize a function J of Theta by iteratively updating the values of Theta.
    24:21: 📝 Gradient descent is a method used to update values in each step by calculating the partial derivative of the cost function.
    30:13: 📝 The partial derivative of the squared-error term with respect to Theta_j works out to (h(x) - y) * x_j, and one step of gradient descent updates Theta_j using it.
    36:08: 🔑 The choice of learning rate in the algorithm affects its convergence to the global minimum.
    41:45: 📊 Batch gradient descent is a method in machine learning where the entire training set is processed as one batch, but it has a disadvantage when dealing with large datasets.
    47:13: 📈 Stochastic gradient descent allows for faster progress in large datasets but never fully converges.
    52:23: 📝 Gradient descent is an iterative algorithm used to find the global optimum, but for linear regression, the normal equation can be used to directly jump to the global optimum.
    58:59: 📝 The derivative of a matrix function with respect to the matrix itself is a matrix with the same dimensions, where each element is the derivative with respect to the corresponding element in the original matrix.
    1:05:51: 📝 The speaker discusses properties of matrix traces and their derivatives.
    1:13:17: 📝 The derivative of J(Theta) is obtained by differentiating one-half times (X Theta minus y) transpose times (X Theta minus y).
    Recap by Tammy AI
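    The batch gradient descent update summarized in the recap above (roughly the 18:40-41:45 material) can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the lecture's code; the toy data, learning rate, and iteration count are assumptions.

    ```python
    import numpy as np

    # Toy dataset: first column is the dummy feature x0 = 1, second is a single input feature.
    X = np.array([[1.0, 0.5],
                  [1.0, 1.0],
                  [1.0, 1.5],
                  [1.0, 2.0]])
    y = np.array([1.0, 1.8, 2.7, 3.4])

    theta = np.zeros(2)   # parameters theta_0, theta_1
    alpha = 0.1           # learning rate (arbitrary choice for this toy problem)

    for _ in range(1000):                  # batch gradient descent: use all examples each step
        errors = X @ theta - y             # h_theta(x^(i)) - y^(i) for every example
        gradient = X.T @ errors            # sum_i (h(x^(i)) - y^(i)) * x_j^(i) for each j
        theta = theta - alpha * gradient   # theta_j := theta_j - alpha * dJ/dtheta_j

    print(theta)   # approaches the least-squares fit for this toy data
    ```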

    • @Lucky-vm9dv
      @Lucky-vm9dv 11 місяців тому +6

      How much do we have to pay for your valuable overview of the entire class?
      Kudos to your efforts 👍

    • @MLLearner
      @MLLearner 4 місяці тому

      Thank you so much 👍🫡

    • @sarkersaadahmed
      @sarkersaadahmed 2 місяці тому

      Legend

    • @surajr4757
      @surajr4757 Місяць тому

      @@Lucky-vm9dv Bro didn't read the last line, Recap by Tammy AI🙂

  • @abhishekagrawal896
    @abhishekagrawal896 3 місяці тому +12

    🎯 Key points for quick navigation:
    00:03 *🏠 Introduction to Linear Regression*
    - Linear regression is a learning algorithm used to fit linear models.
    - Motivation for linear regression is explained through a supervised learning problem.
    - Collecting a dataset, defining notations, and building a regression model are important steps.
    04:04 *📊 Designing a Learning Algorithm*
    - The process of supervised learning involves inputting a training set and outputting a hypothesis.
    - Key decisions in designing a machine learning algorithm include defining the hypothesis representation.
    - Understanding the workflow, dataset, and hypothesis structure is crucial in creating a successful learning algorithm.
    07:19 *🏡 Multiple Features in Linear Regression*
    - Introducing multiple input features in linear regression models.
    - The importance of adding additional features like the number of bedrooms to enhance prediction accuracy.
    - Notation, such as defining a dummy feature for simplifying hypotheses, is explained.
    13:03 *🎯 Cost Function and Parameter Optimization*
    - Choosing parameter values Theta to minimize the cost function J of Theta.
    - The squared error is used in linear regression as a measure of prediction accuracy.
    - Parameters are iteratively adjusted using gradient descent to find the optimal values for the model.
    24:18 *🧮 Linear Regression: Gradient Descent Overview*
    Explanation of gradient descent in each step:
    - Update Theta values for each feature based on the learning rate and partial derivative of the cost function.
    - Learning rate determination for practical applications.
    - Detailed explanation of the derivative calculation for one training example.
    27:11 *📈 Gradient Descent Algorithm*
    Derivation of the partial derivative with respect to Theta.
    - Calculating the partial derivative for a simple training example.
    - Update equation for each step of gradient descent using the calculated derivative.
    33:11 *📉 Optimization: Convergence and Learning Rate*
    Concepts of convergence and learning rate optimization in gradient descent:
    - Explanation of repeat until convergence in gradient descent.
    - Impact of learning rate on the convergence speed and efficiency.
    - Practical approach to determining the optimal learning rate during implementation.
    41:22 *📊 Batch Gradient Descent vs. Stochastic Gradient Descent*
    Comparison between batch gradient descent and stochastic gradient descent:
    - Description of batch gradient descent processing the entire training set in one batch.
    - Introduction to stochastic gradient descent processing one example at a time for parameter updates.
    - Illustration of how stochastic gradient descent takes a slightly noisy path towards convergence.
    47:22 *🏃 Stochastic Gradient Descent vs. Batch Gradient Descent*
    - Stochastic gradient descent is used more in practice with very large datasets.
    - Mini-batch gradient descent is another algorithm that can be used with datasets that are too large for batch gradient descent.
    - Stochastic gradient descent is often preferred due to its faster progress in large datasets.
    53:01 *📉 Derivation of the Normal Equation for Linear Regression*
    - The normal equation allows for the direct calculation of optimal parameter values in linear regression without an iterative algorithm.
    - Deriving the normal equation involves taking derivatives, setting them to zero, and solving for the optimal parameters theta.
    - Matrix derivatives and linear algebra notation play a crucial role in deriving the normal equation.
    57:52 *🧮 Matrix Derivatives and Trace Operator*
    - The trace operator sums the diagonal entries of a matrix.
    - Properties of the trace operator include the trace of a matrix being equal to the trace of its transpose.
    - Derivatives with respect to matrices can be computed using the trace operator for functions mapping to real numbers.
    01:12:49 *📈 Linear Regression Derivation Summary*
    - Deriving the gradient for the cost function J(Theta) involves taking the derivative of a quadratic function.
    01:15:19 *🧮 Deriving the Normal Equations*
    - Setting the derivative of J(Theta) to 0 leads to the normal equations X^T X Theta = X^T y.
    - Using matrix derivatives helps simplify the final equation for Theta.
    01:17:09 *🔍 Dealing with Non-Invertible X Matrix*
    - When X is non-invertible, it indicates redundant features or linear dependence.
    - The pseudo inverse can provide a solution in the case of linearly dependent features.
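    The normal-equation and pseudo-inverse points above (1:15:19 and 1:17:09) translate directly into NumPy. A minimal sketch, assuming a made-up design matrix; this is not code from the course:

    ```python
    import numpy as np

    # Design matrix with the dummy feature x0 = 1, plus size and #bedrooms (made-up rows).
    X = np.array([[1.0, 2104.0, 3.0],
                  [1.0, 1416.0, 2.0],
                  [1.0, 1534.0, 3.0],
                  [1.0,  852.0, 2.0]])
    y = np.array([400.0, 232.0, 315.0, 178.0])

    # Normal equations: X^T X theta = X^T y  =>  theta = (X^T X)^(-1) X^T y
    theta = np.linalg.solve(X.T @ X, X.T @ y)

    # If X^T X is singular (redundant / linearly dependent features),
    # the pseudo-inverse still yields a solution.
    theta_pinv = np.linalg.pinv(X) @ y

    print(theta)
    print(theta_pinv)   # identical here, since this X^T X happens to be invertible
    ```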

    • @hmm7780
      @hmm7780 Місяць тому +1

      Thanx Bro for this!!

  • @k-bobmakabaka4420
    @k-bobmakabaka4420 Рік тому +325

    when u paying 12k to your own university a year just so you can look up a course from a better school for free

    • @paulushimawan5196
      @paulushimawan5196 Рік тому +5

      University costs need to be as low as possible.

    • @_night_spring_
      @_night_spring_ Рік тому +9

      while YouTube has unlimited free information and courses better than tech universities and colleges 🙂

    • @Call-me-Avi
      @Call-me-Avi 11 місяців тому

      Hahahahaahaha fucking hell thats what i am doing right fucking now.

    • @preyumkumar7404
      @preyumkumar7404 8 місяців тому

      which uni is that...

    • @k-bobmakabaka4420
      @k-bobmakabaka4420 8 місяців тому

      @@preyumkumar7404 University of Toronto

  • @manudasmd
    @manudasmd Рік тому +86

    Feels like sitting in a Stanford classroom from India... Thanks Stanford, you guys are the best.

    • @gurjotsingh3726
      @gurjotsingh3726 10 місяців тому +4

      for real bro, me sitting in panjab, would have never come across how the top uni profs are, this is surreal.

    • @hamirmahal
      @hamirmahal 6 місяців тому +2

      ​@@gurjotsingh3726 Sat sri akaal, ਖੁਸ਼ਕਿਸਮਤੀ

  • @calvin_713
    @calvin_713 11 місяців тому +55

    This course saves my life! The lecturer of the ML course I'm attending rn is just going thru those crazy math derivations presuming that all the students have mastered it all before😂

    • @mahihoque4598
      @mahihoque4598 2 місяці тому

      My man was teaching as if these top-percentile brains had forgotten simple partial differentiation, and ours just don't even care😢

  • @adeelfarooq6319
    @adeelfarooq6319 7 днів тому +1

    Linear regression and gradient descent are introduced as the first in-depth learning algorithm. The video covers the hypothesis representation, cost function, and optimization using batch and stochastic gradient descent. The normal equation is also derived as an efficient way to fit linear models.
    Highlights:
    00:11 Linear regression is a fundamental learning algorithm in supervised learning, used to fit models like predicting house prices. The algorithm involves defining hypotheses, parameters, and training sets to make accurate predictions.
    -Supervised learning involves mapping inputs to outputs, like predicting house prices based on features. Linear regression is a simple yet powerful algorithm for this task.
    -In linear regression, hypotheses are defined as linear functions of input features. Parameters like theta are chosen by the learning algorithm to make accurate predictions.
    -Introducing multiple input features in linear regression expands the model's capabilities. Parameters like theta are adjusted to fit the data accurately.
    13:01 Linear regression involves choosing parameters Theta to minimize the squared difference between the hypothesis output and the actual values for training examples, achieved through a cost function J of Theta. Gradient descent is used to find the optimal Theta values for minimizing J of Theta.
    -Explanation of input features X and output Y in linear regression, highlighting the importance of terminology and notation in defining hypotheses.
    -Defining the cost function J of Theta in linear regression as the squared difference between predicted and actual values, leading to the minimization of this function to find optimal parameters.
    -Introduction to gradient descent as an algorithm used to minimize the cost function J of Theta and find the optimal parameters for linear regression.
    18:47 Gradient descent is a method used to minimize a function by iteratively adjusting parameters. It involves taking steps in the direction of steepest descent to reach a local optimum.
    -Visualization of gradient descent involves finding values for Theta to minimize J of Theta, picturing the 3D surface of J as contours in 2D space.
    -Gradient descent algorithm involves updating parameters Theta using the learning rate and the partial derivative of the cost function with respect to Theta.
    -Determining the learning rate in practice involves starting with a common value like 0.01 and adjusting based on feature scaling for optimal function minimization.
    27:26 Understanding the partial derivative in gradient descent is crucial for updating parameters efficiently. The algorithm iterates through training examples to find the global minimum of the cost function, adjusting Theta values accordingly.
    -Explanation of the partial derivative calculation in gradient descent and its importance in updating parameters effectively.
    -Expanding on the concept of gradient descent with multiple training examples and the iterative process of updating Theta values for convergence.
    -Illustration of how the cost function J of Theta behaves in linear regression models, showing a quadratic function without local optima, aiding in efficient parameter optimization.
    36:30 Gradient descent is a key algorithm in machine learning, adjusting parameters to minimize errors. It's crucial to choose the right learning rate to efficiently converge.
    -Visualizing gradient descent with data points and parameter adjustments helps understand the algorithm's progression.
    -Batch gradient descent processes the entire dataset at once, suitable for small datasets but inefficient for large ones due to extensive computations.
    -The limitations of batch gradient descent in handling big data sets due to the need for repeated scans, leading to slow convergence and high computational costs.
    44:58 Stochastic gradient descent updates parameters using one training example at a time, making faster progress on large datasets compared to batch gradient descent, which is slower but more stable.
    -Comparison of stochastic and batch gradient descent. Stochastic is faster on large datasets but doesn't converge, while batch is slower but more stable.
    -Mini-batch gradient descent. Using a subset of examples for faster convergence compared to one at a time in stochastic gradient descent.
    -Importance of decreasing the learning rate. Reducing the step size in stochastic gradient descent for smoother convergence towards the global minimum.
    53:39 The normal equation provides a way to find the optimal parameters in linear regression in one step, leading to the global optimum without iterative algorithms. Linear algebra notation simplifies deriving the normal equation and matrix derivatives for efficient computation.
    -The normal equation streamlines finding optimal parameters in linear regression, bypassing iterative methods for quick convergence to the global optimum.
    -Utilizing matrix derivatives and linear algebra notation simplifies the derivation process, reducing complex computations to a few lines for efficiency.
    -Understanding matrix functions mapping to real numbers and computing derivatives with respect to matrices enhances algorithm derivation and optimization in machine learning.
    1:03:52 The video explains the concept of the trace of a matrix, its properties, and how it relates to derivatives in matrix calculus, providing examples and proofs. It also demonstrates how to express a cost function in matrix vector notation for machine learning optimization.
    -Properties of the trace of a matrix are discussed, including the fact that the trace of a matrix is equal to the trace of its transpose, and the cyclic permutation property of the trace of matrix products.
    -The video delves into the derivative properties of the trace operator in matrix calculus, showcasing how the derivative of a function involving the trace of a matrix can be computed and proven.
    -The concept of expressing a cost function in matrix vector notation for machine learning optimization is explained, demonstrating how to set up the design matrix and compute the cost function using matrix operations.
    1:15:15 The video explains the normal equations in linear regression, where the derivative is set to 0 to find the optimum Theta value using matrix derivatives, leading to X transpose X Theta equals X transpose y.
    -Explanation of the normal equations in linear regression and setting the derivative to 0 to find the optimal Theta value using matrix derivatives.
    -Addressing the scenario of X being non-invertible due to redundant features and the solution using the pseudo inverse for linearly dependent features.
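    To make the batch vs. stochastic vs. mini-batch comparison above concrete, here is a rough sketch of the two update loops. The synthetic data, batch size, and learning-rate schedule are my own assumptions, not values from the lecture:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.c_[np.ones(200), rng.uniform(0, 3, 200)]      # 200 examples: dummy x0 = 1 plus one feature
    y = 1.0 + 2.0 * X[:, 1] + rng.normal(0, 0.1, 200)    # synthetic targets around a true line

    # Stochastic gradient descent: update after every single example.
    theta, alpha = np.zeros(2), 0.01
    for epoch in range(20):
        for i in rng.permutation(len(y)):
            error = X[i] @ theta - y[i]
            theta -= alpha * error * X[i]
        alpha *= 0.9          # slowly decrease the learning rate so the oscillation shrinks

    # Mini-batch gradient descent: update after, say, 32 examples at a time.
    theta_mb, alpha_mb, batch = np.zeros(2), 0.3, 32
    for epoch in range(20):
        for start in range(0, len(y), batch):
            Xb, yb = X[start:start + batch], y[start:start + batch]
            theta_mb -= alpha_mb * Xb.T @ (Xb @ theta_mb - yb) / len(yb)   # averaged batch gradient

    print(theta, theta_mb)    # both end up near the true parameters [1.0, 2.0]
    ```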

  • @user-hm5qk8ic6j
    @user-hm5qk8ic6j Рік тому +45

    8:50 notations and symbols
    13:08 how to choose theta
    17:50 Gradient descent

    • @dens3254
      @dens3254 Рік тому +2

      52:50 Normal equations

  • @DagmawiAbate
    @DagmawiAbate Рік тому +27

    I am not good at math anymore, but I think math is simple if you get the right teachers like you. Thanks.

  • @LuisFuentes98
    @LuisFuentes98 Рік тому +54

    Hey, can I point out what an amazing teacher Professor Andrew is?!
    Also, I love how he is all excited about the lesson he is giving! It just makes me feel even more interested in the subject.
    Thanks for this awesome course!

    • @tanishsharma136
      @tanishsharma136 Рік тому +2

      Look at Coursera, he founded that and has many free courses.

  • @dimensionentangled4514
    @dimensionentangled4514 2 роки тому +73

    We define a cost function based on the sum of squared errors. The job is to minimise this cost function with respect to the parameters. First, we look at (batch) gradient descent. Second, we look at stochastic gradient descent, which does not give us the exact value at which the minimum is achieved, but is much more effective for big data. Third, we look at the normal equation. This equation directly gives us the value at which the minimum is achieved! Linear regression is one of the few models for which such an equation exists.

    • @xxdxma6700
      @xxdxma6700 2 роки тому +12

      I wish you sat next to me in class 😂

    • @rajvaghasia9942
      @rajvaghasia9942 2 роки тому +1

      Bro who named that equation as normal equation?

    • @alessandroderossi8930
      @alessandroderossi8930 2 роки тому +6

      @@rajvaghasia9942 the name "normal equation" comes from generalizing the idea of perpendicularity (normal to something means perpendicular to it). The normal equation describes the projection of the actual sampled data onto the line you start with (in the case of LINEAR regression). That projection carries information about the distances between the real data (the samples) and your "starting line", so to find the optimal curve that fits your data you have to find weights and biases (in this video Theta0, Theta1 and so on) that minimize this distance. You can minimize it using gradient descent (costly), stochastic gradient descent (a partial derivative per example rather than the full gradient of the loss function), or the "normal equations". Understand? Here is an image from Wikipedia to understand better (the green lines are the famous distances): en.wikipedia.org/wiki/File:Linear_least_squares_example2.svg

    • @JDMathematicsAndDataScience
      @JDMathematicsAndDataScience 2 роки тому

      @@rajvaghasia9942 because we're in the matrix now bro! ha. For real though. It's about the projection matrix and the matrix representation/method of acquiring the beta coefficients.

    • @JDMathematicsAndDataScience
      @JDMathematicsAndDataScience 2 роки тому

      I have been wondering why we need such an algorithm when we could just derive the least squares estimators. Have you seen any research comparing the gradient descent method of selection of parameters with the typical method of deriving the least squares estimators of the coefficient parameters?
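    On the question of gradient descent versus just deriving the least-squares estimators: for plain linear regression they agree, which a quick check like the one below illustrates (synthetic data and settings are my own assumptions). Gradient descent mainly matters when the dataset is too large for the normal-equation solve, or when the model is no longer linear.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.c_[np.ones(500), rng.normal(size=500)]
    y = X @ np.array([0.5, -1.5]) + rng.normal(0, 0.05, 500)

    # Closed-form least squares (equivalent to solving the normal equations).
    theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Batch gradient descent on the same squared-error cost.
    theta_gd = np.zeros(2)
    for _ in range(5000):
        theta_gd -= 0.001 * X.T @ (X @ theta_gd - y)

    print(theta_ls, theta_gd)   # both approach [0.5, -1.5]
    ```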

  • @imad1996
    @imad1996 Рік тому +8

    We learn, and teachers give us the information in a way that can help stimulate our learning abilities. So, we always appreciate our teachers and the facilities contributing to our development. Thank you.

  • @i183x4
    @i183x4 Рік тому +11

    8:50 notations and symbols
    13:08 how to choose theta
    17:50 Gradient descent
    8:42 - 14:42 - Terminologies completion
    51:00 - batch
    55:00 problem 1 set
    57:00 for p 0

    • @AshishRaj04
      @AshishRaj04 Рік тому

      notes are not available on the website ???

  • @Honey-sv3ek
    @Honey-sv3ek 2 роки тому +28

    I really don't have a clue about this stuff, but it's interesting and I can concentrate a lot better when I listen to this lecture so I like it

    • @FA-BCS-MUHAMMADHAMIDSAEEDUnkno
      @FA-BCS-MUHAMMADHAMIDSAEEDUnkno 2 роки тому +1

      You can see his lecture on coursera about Machine learning. You will surely get what he is saying in this video.

    • @paulushimawan5196
      @paulushimawan5196 Рік тому

      @@FA-BCS-MUHAMMADHAMIDSAEEDUnkno yes, that course is beginner-friendly. Everyone with basic high school math can take that course even without knowledge of calculus.

  • @anushka.narsima
    @anushka.narsima Рік тому +27

    Thank you so much Dr. Andrew! It took me some time but your stepwise explanation and notes have given me a proper understanding. I'm learning this to make a presentation for my university club. We all are very grateful!

    • @Amit_Kumar_Trivedi
      @Amit_Kumar_Trivedi Рік тому

      Hi I was not able to download the notes, 404 error, from the course page in description. Other PDFs are available on the course page. Are you enrolled or where did you download the notes from?

    • @anushka.narsima
      @anushka.narsima Рік тому +16

      @@Amit_Kumar_Trivedi cs229.stanford.edu/lectures-spring2022/main_notes.pdf

    • @georgenyagura7742
      @georgenyagura7742 Рік тому +1

      @@anushka.narsima thanks

  • @nikhithar3077
    @nikhithar3077 6 місяців тому +1

    39:38 we're subtracting because, to minimize the cost function, we move opposite to the gradient (the two vectors are at 180°), which is where the negative sign comes from.

  • @zzh315
    @zzh315 7 місяців тому +8

    "Wait, AI is just math?"
    "Always has been"

  • @jeroenoomen8145
    @jeroenoomen8145 8 місяців тому +4

    Thank you to Stanford and Andrew for a wonderful series of lectures!

  • @deepakbastola6302
    @deepakbastola6302 Місяць тому

    Dr. NG is always my best.. keep up motivating with such classes.

  • @parthjoshi5892
    @parthjoshi5892 Рік тому +3

    Would anyone please share the lecture notes? On clicking on the link for the pdf notes on the course website, its showing an error that the requested URL was not found on the server. It would really be great if someone could help me with finding the class notes.

    • @amaia7045
      @amaia7045 6 місяців тому

      I think I found them here: cs229.stanford.edu/main_notes.pdf

  • @jaeen7665
    @jaeen7665 6 місяців тому

    One of the greats, a legend in AI & Machine Learning. Up there with Prof. Strang and Prof LeCun.

  • @olinabin2004
    @olinabin2004 Рік тому +1

    8:42 - 14:42 - Terminologies completion
    17:51 -- Checkpoint
    57:00 - run1

  • @ambushtunes
    @ambushtunes Рік тому +4

    Attending Stanford University from Nairobi, Kenya.

  • @ZDixon-io5ww
    @ZDixon-io5ww 2 роки тому +13

    47:00
    51:00 - batch
    55:00 problem 1 set
    57:00 for p 0

  • @HarshitSharma-YearBTechChemica
    @HarshitSharma-YearBTechChemica 7 місяців тому +2

    Does someone know how to get the lecture notes?
    They are not available on Stanford's website.

  • @raymundovazquezmusic216
    @raymundovazquezmusic216 Рік тому +8

    Can you update the lecture notes and assignments in the website for the course? Most of the links to the documents are broken

    • @stanfordonline
      @stanfordonline  Рік тому +19

      Hi there, thanks for your comment and feedback. The course website may be helpful to you cs229.stanford.edu/ and the notes document docs.google.com/spreadsheets/d/12ua10iRYLtxTWi05jBSAxEMM_104nTr8S4nC2cmN9BQ/edit?usp=sharing

    • @adi29raj
      @adi29raj Рік тому +3

      @@stanfordonline Where can I access the problem sets?

    • @salonisingla1665
      @salonisingla1665 Рік тому +1

      @@stanfordonline Please post this in the description to every video. Having this in an obscure reply to a comment will only lead to people missing it while scrolling.

  • @ikramadjissa370
    @ikramadjissa370 2 роки тому +8

    Andrew Ng you are the best

  • @vseelix957
    @vseelix957 Рік тому +1

    my machine learning lecturer is so dogshit I thought this unit was impossible to understand. Now following these on study break before midsem and this guy is the best. I'd prefer that my uni just refers to these lectures rather than making their own

  • @claudiosaponaro4565
    @claudiosaponaro4565 Рік тому +2

    the best professor in the world.

  • @26d8
    @26d8 11 місяців тому +1

    The partial derivative was incomplete to me. Shouldn't we take the derivative of the 2/2 theta term as well, or is that term a constant?
    Shouldn't we go with the product rule?

  • @diegoalias2935
    @diegoalias2935 Рік тому +19

    Really easy to understand. Thanks a lot for sharing!

    • @massimovarano407
      @massimovarano407 Рік тому

      Sure it is; it's a high school topic, at least in Italy

    • @gustavoramalho9454
      @gustavoramalho9454 Рік тому +10

      @@massimovarano407 I'm pretty sure multivariate calculus is not a high-school topic in Europe

  • @chhaysith
    @chhaysith Рік тому +1

    Dear Dr. Andrew, I saw your other video where the linear regression cost function uses 1/2m, but this video uses 1/2. What is the difference? (footnote 16:00)

    • @treqemad
      @treqemad Рік тому +1

      I don't really understand what you mean by 1/2m. However, from my understanding, the 1/2 is just for simplicity: when taking the derivative of the cost function, the power 2 comes down, multiplies the equation, and cancels with the half.

    • @googgab
      @googgab Рік тому

      It should be 1/2m where m is the size of the data set. That's because we'd like to take the average sum of squared differences and not have the cost function depend on the size of the data set m
      ua-cam.com/video/ZzeDtSmrRoU/v-deo.html
      He explains it here at 6:30 minutes

    • @aman-qj5sx
      @aman-qj5sx Рік тому

      @@googgab It should be ok if J depends on m since m isn't changing?

    • @labiditasnim623
      @labiditasnim623 Рік тому

      same question
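    A quick numerical way to see why the 1/2 versus 1/(2m) choice doesn't matter: scaling J(theta) by a positive constant scales its gradient by the same constant, so the minimizer is unchanged and the factor can be folded into the learning rate. A small sketch with made-up numbers:

    ```python
    import numpy as np

    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    y = np.array([2.0, 3.0, 5.0])
    m = len(y)
    theta = np.array([0.1, 0.2])

    grad_half   = X.T @ (X @ theta - y)   # gradient of (1/2)    * sum of squared errors
    grad_half_m = grad_half / m           # gradient of (1/(2m)) * sum of squared errors

    # Same direction, just rescaled by 1/m, so the same minimizer;
    # the constant can be absorbed into the learning rate alpha.
    print(grad_half, grad_half_m, np.allclose(grad_half, m * grad_half_m))
    ```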

  • @learnfullstack
    @learnfullstack Рік тому +3

    if board is full, slide up the board, if it refuses to go up, pull it back down, erase and continue writing on it.

  • @michaelgreenhut180
    @michaelgreenhut180 Місяць тому

    So I come from game development, and I'm simulating a super simplified version of the batch gradient descent in Unity3D just for fun, so I can visualize it. One thing I'm noticing is that, for each X input, the algorithm seems to gradually make h(x) match the exact Y values. So if all the Y plots look like a zig zag, the h(x) plots will just mold over that zigzag and copy it exactly instead of forming a line through it. What am I doing wrong? Am I misunderstanding theta j?

    • @michaelgreenhut180
      @michaelgreenhut180 Місяць тому

      Oh, wait. I think I got it -- I was making two separate thetas FOR EACH input, when I should have only been using two thetas for the entire process. For some reason I thought each X vector had to have its *own* weight for house size and for #bedrooms.
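    For anyone hitting the same wall in their own visualization: there is exactly one shared parameter vector theta for the whole dataset, and batch gradient descent fits a single line through the noisy points rather than copying them. A minimal sketch under made-up data and settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 4, 30)
    y = 1.0 + 0.8 * x + rng.normal(0, 0.3, 30)   # noisy "zigzag" around a true line

    X = np.c_[np.ones_like(x), x]   # design matrix: [1, x] for every example
    theta = np.zeros(2)             # ONE theta vector shared by all examples

    for _ in range(5000):
        theta -= 0.01 * X.T @ (X @ theta - y) / len(y)   # averaged batch gradient

    print(theta)   # roughly [1.0, 0.8]: a line through the noise, not a copy of each point
    ```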

  • @ozonewagle
    @ozonewagle Рік тому +1

    Why aren't we using the usual numerical methods (least squares) to fit a straight line to a given set of data points?

  • @user-up3fn9cw7h
    @user-up3fn9cw7h 5 місяців тому +1

    at 40:10, what if we set the initial value at a point where the gradient is negative? Then we should increase theta rather than decrease it, right?

    • @anikdas567
      @anikdas567 3 місяці тому +1

      The same update rule still works. The aim is to find a minimum, so you adjust the parameters (theta) until the slope approaches zero (the slope is zero at the minimum). Look at the graph of a quadratic: if you start where the slope is negative, the update theta := theta - alpha * (gradient) subtracts a negative number, so theta increases; where the slope is positive, theta decreases. Either way the rule moves theta toward the point where the gradient approaches zero.

  • @clinkclink7814
    @clinkclink7814 Рік тому +5

    Very clear explanations. Extra points for sounding like Stewie Griffin

  • @shashankshekharjha6913
    @shashankshekharjha6913 2 місяці тому

    okay, so the superscript i (1 to m) represents the number of features, right? Because here m = 2, and I don't understand why m = # training examples

  • @atalantinopieva
    @atalantinopieva 2 місяці тому

    Hi, can a gentle soul explain to me why in the linear example of the house price j=2, but in the visualization of the algorithm at 37:25 we have 4 iterations? Should the number of iterations always equal the number of features?

    • @lyricalrohit
      @lyricalrohit 2 місяці тому

      The number of iterations has no relation to the number of parameters being calculated. Iteration continues as long as J(θ) can still be decreased by varying the parameters (which is signified by the gradient of J(θ) with respect to the parameters at the given point); if the gradient is zero, stop iterating.
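    The "repeat until convergence" idea in this thread is often implemented by iterating until the gradient (or the change in J(theta)) falls below a tolerance, rather than for a fixed number of steps. A hedged sketch; the data and tolerance are arbitrary choices:

    ```python
    import numpy as np

    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
    y = np.array([3.0, 5.0, 7.0, 9.0])        # exactly y = 1 + 2x, so the fit is perfect
    theta, alpha, tol = np.zeros(2), 0.05, 1e-8

    iterations = 0
    while True:
        gradient = X.T @ (X @ theta - y)
        if np.linalg.norm(gradient) < tol:    # gradient is (numerically) zero: stop iterating
            break
        theta -= alpha * gradient
        iterations += 1

    print(iterations, theta)   # the iteration count is unrelated to the number of parameters
    ```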

  • @ObaroJohnson-q8v
    @ObaroJohnson-q8v Місяць тому

    The formula looks like the variance formula; I'd be interested to know why we have that 1/2 in front of the squared losses. Could we just use the variance formula instead, or is there a theory behind that? Thanks

  • @polymaththesolver5721
    @polymaththesolver5721 Рік тому +3

    Thank you Stanford for this amazing resource. Please, can I get a link to the lecture notes? Thanks

  • @uekiarawari3054
    @uekiarawari3054 Рік тому +1

    difficult words:
    cost function
    gradient descent
    convex optimization
    hypothesis fx
    target
    j of theta = cost/loss function
    partial derivatives
    chain rule
    global optimum
    batch gradient descent
    stochastic gradient descent
    mini batch gradient descent
    decreasing learning rate
    parameters oscillating
    iterative algorithm
    normal equation
    trace of A

  • @anikdas567
    @anikdas567 3 місяці тому

    I think he made a mistake when he defined the cost function at 16:17 (for "m" training examples). He just gave 1/2 as the constant, which works fine for 1 training example, but it felt a bit weird to use this for m training examples: it's like we are adding "m" quantities and dividing by 2. Shouldn't it be an average? I searched Google and it showed the formula for the cost function with 1/2m as the factor, which makes sense. The 2 is just a trick so that when differentiating it cancels with the power (which is 2); the 2 in the denominator can be absorbed by the learning rate (alpha), but missing the "m" in the denominator doesn't feel right. Can anyone please confirm or refute this?

    • @NehaGupta-xw2xg
      @NehaGupta-xw2xg 3 місяці тому +1

      Oh, thank you so much for pointing this out; I had the same doubt.

    • @lyricalrohit
      @lyricalrohit 2 місяці тому +1

      It doesn't matter whether we introduce "m" in the denominator or not. For a given dataset, "m" is a constant, and the minimisation you mentioned is achieved by minimizing the numerator only. The only effect "m" and "2" have is to shrink the step size in each iteration, which makes the computation longer.

  • @techpasya974
    @techpasya974 5 місяців тому +1

    Are the lecture notes publicly available for this? I have been watching this playlist and I think the lecture notes would be very helpful.

    • @KorexeroK
      @KorexeroK 4 місяці тому

      cs229.stanford.edu/main_notes.pdf

  • @danilvinyukov2060
    @danilvinyukov2060 Місяць тому

    1:17:31
    Can't we just get rid of the X transpose on both sides of the equation? As I remember from linear algebra, if you have the same matrix multiplying both sides of an equation from the same side, it is redundant and can be removed.
    The result should be Xθ = y => θ = X^(-1) y

  • @riajulchowdhury4218
    @riajulchowdhury4218 Місяць тому +1

    Where can I get the lecture notes? I can't access the files in the website.

  • @skillato9000
    @skillato9000 Рік тому +8

    1:01:06 Didn't know Darth Vader attended these lectures

  • @souravsengupta1311
    @souravsengupta1311 9 місяців тому +2

    Can't download the course class notes, please look into it

  • @tanmayshukla8660
    @tanmayshukla8660 Рік тому +1

    Why do we take the transpose of each row, wouldn't it be stacking columns on top of each other?

  • @user-rj5ws9ry1w
    @user-rj5ws9ry1w Рік тому +2

    The notes from the description seem to have vanished. Does anyone have them?

  • @anonymous-3720
    @anonymous-3720 Рік тому +3

    Which book is he using? and where do we find the homework?

  • @jerzytas
    @jerzytas 10 місяців тому

    In the very last equation (normal equation, 1:18:06) Transpose(X) appears on both sides of the equation; can't this be simplified by dropping Transpose(X)?

    • @manasvi-fl6xq
      @manasvi-fl6xq 7 місяців тому

      No, because X is not necessarily a square matrix

  • @HeisenbergHK
    @HeisenbergHK 7 місяців тому +1

    Where can I find the notes and other videos and any material related to this class!?

  • @Goaks8128
    @Goaks8128 2 роки тому +6

    Seems like the Lagrangian or path-of-least-action theory in physics can be applied to algorithmic manipulations in machine learning, as well as economics where isoquant curves and marginal analysis depend on many variables... not being an expert in any field, the topics seem very similar and some correlation may exist... perhaps already being used.

    • @godson200
      @godson200 Рік тому +1

      Do you speak english?

  • @truszko91
    @truszko91 Рік тому

    28:51, what are x0 and x1? If we have a single feature, say # of bedrooms, how can we have x0 and x1? Wouldn't x0 just be nothing? I'm confused. Or, in other words, if my Theta0 update relies on x0, but x0 doesn't exist, theta0 will always stay at the initial theta0...

    • @MahakYadav12
      @MahakYadav12 Рік тому +1

      The value of x0 is always 1, so theta0 can rely on x0 for the update. If we have a single feature then h(x) = x0*theta0 + x1*theta1 (which is ultimately theta0 + x1*theta1 since x0 = 1; theta0 can also be thought of as the intercept and theta1 as the slope if you compare it with the equation of a straight line, where the price of the house is a linear function of # of bedrooms)

    • @truszko91
      @truszko91 Рік тому +1

      @@MahakYadav12 thank you!!
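    In code, the x0 = 1 trick is just a column of ones prepended to the inputs, so theta_0 always gets "multiplied by 1" in the hypothesis and in every update. A tiny sketch with hypothetical numbers:

    ```python
    import numpy as np

    bedrooms = np.array([[2.0], [3.0], [4.0]])               # a single real feature, x1
    X = np.hstack([np.ones((len(bedrooms), 1)), bedrooms])   # prepend x0 = 1 to each example

    theta = np.array([50.0, 25.0])   # theta_0 (intercept) and theta_1 (slope)
    h = X @ theta                    # h(x) = theta_0 * 1 + theta_1 * x1

    print(X)
    print(h)   # [100. 125. 150.]
    ```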

  • @chandarayi5673
    @chandarayi5673 Рік тому +1

    I love you Sir Andrew, you inspire me a lot haha

  • @fahyen6557
    @fahyen6557 Рік тому +8

    why do all the students sound like darth vader

  • @Suliyaa_Agri
    @Suliyaa_Agri 3 місяці тому

    Andrew's voice is everything, and that blue shirt of his

  • @wonggran9983
    @wonggran9983 2 роки тому +3

    Fred has a one-hundred-sided die. Fred rolls the die once and gets side i. Fred then rolls the die again (a second roll) and gets side j, where side j is not side i.
    What is the probability of this event E? Assume the one hundred sides of the die all have an equal probability of facing up.

    • @Tryingitoutletsee
      @Tryingitoutletsee 2 роки тому +1

      1 - (1/10000) = 9999/10000

    • @ahmettolgakarabulut9380
      @ahmettolgakarabulut9380 Рік тому

      the probability of getting the same results for two rolls and they are both defined is 1/10000. So that we will subtract that from 1

    • @billr5842
      @billr5842 Рік тому

      Wouldn't it be 99/100? The first roll can be any number so it doesn't really matter what's there. The second roll just needs to be one of the other 99 numbers. The first roll doesn't really change the probability. Of course, I barely know any math so I'm no expert lol

    • @emirkisa
      @emirkisa Рік тому

      @@billr5842 you're right, the probability calculated above as 1/10000 is the probability of getting the same result for a "specific side", like getting "side 3" twice. But there are 100 different sides that has the 1/10000 probability to occur twice, so the probability 1/10000 is multiplied by the different side number 100 which makes the probability of getting the same result for two rolls equal to 1/100. Then 1 - 1/100 = 99/100
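    For what it's worth, a quick Monte Carlo check agrees with the 99/100 answer above (the simulation size is an arbitrary choice):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    trials = 1_000_000
    first = rng.integers(1, 101, trials)    # first roll of the 100-sided die
    second = rng.integers(1, 101, trials)   # second roll

    print(np.mean(second != first))         # approximately 0.99
    ```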

  • @Z_nix
    @Z_nix 7 місяців тому +4

    How come everybody could understand this? The first lecture was very good. But this one... I couldn't understand a single concept. I have watched half of the video and I don't know what the meaning of linear regression is. Professor Andrew Ng was writing all those equations and I was questioning, what's the meaning of this? Why are we doing this? What's the use of this? I opened the comment section and everybody is appreciating the professor's teaching skills, and then there's me who couldn't understand anything in this video. Where am I going wrong? Why am I different from others?

    • @dimilands
      @dimilands 7 місяців тому +2

      I don't know if I get your question about linear regression, but it is a way of finding a formula that fits our data most accurately, so that our hypothesis is satisfied (I think)

    • @Z_nix
      @Z_nix 7 місяців тому +1

      @@dimilands Now I'm learning machine learning from somebody else and he teaches very, very well. Andrew just sucks.

    • @dimilands
      @dimilands 7 місяців тому

      may i ask who is your new 'source' ? @@Z_nix

    • @ponugotimanojkumar
      @ponugotimanojkumar 3 місяці тому

      @@Z_nix what's your source? I can't understand this lecture either

    • @Z_nix
      @Z_nix 3 місяці тому

      @@ponugotimanojkumar I'm learning it from UA-cam channel 'Campus X'.

  • @promariddhidas6895
    @promariddhidas6895 4 місяці тому +1

    i wish i had access to the problem sets for this course

  • @labiditasnim623
    @labiditasnim623 Рік тому +1

    Why in the cost function did he use 1/2 and not 1/(2m)?

  • @sipraneye70
    @sipraneye70 Рік тому +3

    Where do i get the assignments for these lecture series?

  • @PhilosophyOfWinners
    @PhilosophyOfWinners 10 місяців тому +2

    Loving the lectures!!

  • @ChidinmaOnyeri
    @ChidinmaOnyeri 3 місяці тому

    Hi. Can anyone recommend any textbook that can help in further study of this course.
    Thank you

  • @mortyrickerson6322
    @mortyrickerson6322 Рік тому +2

    Fantastic. Thank you deeply for sharing

  • @bhavyasharma9784
    @bhavyasharma9784 11 місяців тому

    The pdf link to the problem set says Error Not found. Can someone help Please ?

  • @Nobody2310
    @Nobody2310 5 місяців тому

    Has someone (possibly a newbie like me) gone through all the videos and learnt enough to pursue an ML career or create a project? Wondering if a paid class should be taken or whether these free videos are enough.

    • @orignalbox
      @orignalbox 4 місяці тому

      i also want to know have you gone through all the videos

  • @Gatsbi
    @Gatsbi 5 місяців тому

    Had to study basic Calculus and Linear algebra at the same time to understand a bit, but don't get it fully yet,

  • @ajayjoel
    @ajayjoel Місяць тому

    Where can we find the lecture notes for this course?

  • @samsondawit
    @samsondawit Рік тому +1

    why is it that the cost function has the constant 1/2 before the summation and not 1/2m?

    • @ihebbibani7122
      @ihebbibani7122 Рік тому +3

      I think it's because he is taking one learning example and not m learning examples

    • @samsondawit
      @samsondawit Рік тому

      @@ihebbibani7122 ah I see

  • @jpgunman0708
    @jpgunman0708 Рік тому +1

    thanks a lot 吴恩达, I learned a lot

  • @AmanSainiIITIAN
    @AmanSainiIITIAN 2 дні тому

    where can i find lecture notes???

  • @samrendranath
    @samrendranath Рік тому

    How to access the lecture notes? :( They have been removed from the Stanford website.

  • @Baru_Bangun_Tidur
    @Baru_Bangun_Tidur Рік тому

    1:14:54 my answer is (X^T X θ) + (X^T θ^T X) - (X^T Y) - (Y X^T). Is it the same, or is my answer wrong?

  • @thefourhourtalk
    @thefourhourtalk Рік тому

    I didn't understand the linear regression algorithm. Is there any way to understand it better?

  • @gauravpadole1035
    @gauravpadole1035 Рік тому +1

    Can anyone please explain what we mean by the "parameters" denoted by theta here?

    • @SteveVon7
      @SteveVon7 9 місяців тому +1

      Parameters are TRAINABLE numbers in the model, such as weights and biases, since the prediction of the model is based on some combination of weight and bias values. So when the 'parameters' or 'theta' are changed or 'trained', it means that the weights and biases are changed or trained.

  • @putinscat1208
    @putinscat1208 Рік тому

    I asked ChatGPT how to learn machine learning. #1 Coursera:
    Course: "Machine Learning" by Andrew Ng (Stanford University)

  • @RHCIPHER
    @RHCIPHER Рік тому +1

    This man is a great teacher

  • @user-ys7hm5ci5r
    @user-ys7hm5ci5r Місяць тому

    How do I study the applications? Is this only up to theory?

  • @cristianreyes8288
    @cristianreyes8288 Рік тому

    Anybody know where the notes are? The link doesn't work for me

  • @wishIKnewHowToLove
    @wishIKnewHowToLove Рік тому +1

    it's hard, but everything that's worth doing is

  • @GameFlife
    @GameFlife 11 місяців тому +1

    I need those lecture notes ASAP, professor

  • @ahmednesartahsinchoudhury2628
    @ahmednesartahsinchoudhury2628 8 місяців тому

    Does anyone know which textbook goes well with these lectures?

  • @lyndonyang1269
    @lyndonyang1269 2 місяці тому

    anyone knows where to access the homework assignments as practice?

  • @chideraagbasiere7868
    @chideraagbasiere7868 Рік тому

    May I ask, at 7:50 what does θ (theta) represent?

  • @veeraboinabhavaniprasad3864
    @veeraboinabhavaniprasad3864 10 місяців тому

    Could you please tell me the actual use of gradient descent in minimizing J(theta)?

    • @AdeelKhalidSE
      @AdeelKhalidSE 9 місяців тому

      Gradient descent is basically the optimization method that helps minimize the cost of the model. We obtain the cost by calculating the MSE (mean squared error)

  • @puspjoc9975
    @puspjoc9975 3 місяці тому

    Where can I get the full detailed notes? Anyone who knows, please reply.

  • @faisalhussain4022
    @faisalhussain4022 9 місяців тому +1

    Wondering if lecture notes are also available to download from somewhere ?

    • @williambrace6885
      @williambrace6885 8 місяців тому +4

      hey bro I found them: cs229.stanford.edu/lectures-spring2022/main_notes.pdf

    • @kag46
      @kag46 7 місяців тому

      @@williambrace6885thanks a lot!

  • @aliiq6572
    @aliiq6572 Рік тому +1

    Can I get notes for these lectures?

  • @johndubchak
    @johndubchak Рік тому

    Andrew Ng, FTW!

  • @blabla-wc5ke
    @blabla-wc5ke 3 місяці тому

    Are these YouTube lectures the same as the 3 modules uploaded on the Coursera website?

    • @VenchislavCodes
      @VenchislavCodes 3 місяці тому

      Not really. I think this one dives deeper into the topic, while the Coursera one is an easy, beginner-friendly introduction.
      For example, the Support Vector Machine topic is not discussed on Coursera.

  • @R9000S
    @R9000S 7 днів тому

    The voice at 50:38, how is that possible?

  • @victor3btn598
    @victor3btn598 Рік тому +2

    Simple and understandable

  • @wonggran9983
    @wonggran9983 2 роки тому +4

    Ask your undergrads this question on the final exam:
    A 100-sided die. First roll: side i. Second to twentieth rolls: not i. Twenty-first roll: side i. Call this event E. What is the probability of E occurring? Assume the 100-sided die is uniform over the numbers 1, 2, 3, ..., 100.

    • @aniketsaha9106
      @aniketsaha9106 2 роки тому +1

      can you elaborate the question

    • @osere6432
      @osere6432 2 роки тому +1

      This is ambiguous, if you specify an "i" beforehand, you are looking at:
      P(i on 1st roll) * P(not i on 19 rolls) * P(i on 21st roll)
      0.01 * 0.99^19 * 0.01 = 0.00008261686238
      if the i is specified on the first roll:
      P(not i on 19 rolls) * P(i on 21st roll)
      0.99^19 * 0.01 = 0.008261686238

  • @surendranmurugesan
    @surendranmurugesan 8 місяців тому

    is the explanation at 40:00 correct?

  • @DrPan88
    @DrPan88 Рік тому

    Wondering: is m equal to n+1? n stands for the number of inputs, while m stands for the number of rows, which includes x0 in addition.

    • @sandeeproy6564
      @sandeeproy6564 Рік тому +3

      n actually stands for the number of attributes here, or the number of features (columns)

    • @MaxTheKing289
      @MaxTheKing289 Рік тому +1

      No, not necessarily. m is the number of rows, and n is the number of columns or features. In his example n is equal to two (size and bedrooms); m can be any number, but I think in the example m is 50

    • @DrPan88
      @DrPan88 Рік тому +1

      @Louis Aballea yeah I got it. Thanks !

  • @MikeSieko17
    @MikeSieko17 9 місяців тому

    why even go to uni, wtf this is so much better than my lectures and it's free and it's recorded lmao wtf unis be doing they are dying fr

  • @mikeeggleston1769
    @mikeeggleston1769 Рік тому +1

    Very clear, but what I don't get is for the multiple data sets when I sum the errors, do I do two passes through the data and choose the error that is less?

    • @victor3btn598
      @victor3btn598 Рік тому +1

      Just continue changing theta until the cost function reaches its optimum

    • @victor3btn598
      @victor3btn598 Рік тому +1

      Yes, the goal is to reach lower error, and by tweaking theta you can achieve that. Make sure you don't overshoot.

  • @Nevermind1000
    @Nevermind1000 4 місяці тому

    Anyone know where to get the lecture notes for this lecture?

    • @fordownload9611
      @fordownload9611 3 місяці тому +1

      Just search "Stanford machine learning notes"; one of the first results will be a PDF from the cs229 website