What is Least Squares Estimation?

  • Published 1 Oct 2024
  • Explains Least Squares (LS) Estimation with two examples: 1. line-fitting a data set, and 2. digital communications. Derives the LS equation and shows how it can be viewed as a pseudo inverse.
    Related videos: (see iaincollings.com)
    • What is Fisher Information?
    • What is an Adaptive Step Size in Parameter Estimation?
    • What is the Kalman Filter?
    • How are Matched Filter (MF), Zero Forcing (ZF), and MMSE Related?
    • MIMO Communications
    • What is Intersymbol Interference ISI?
    • Signal Model for MIMO and CDMA
    • What is a Decision Feedback Equalizer (DFE)?
    For a full list of Videos and Summary Sheets, go to: iaincollings.com

COMMENTS • 56

  • @alithedazzling
    @alithedazzling 2 years ago +4

    What I love about this channel is the consistency. Every week there's a new good video. Keep it up Iain!

    • @iain_explains
      @iain_explains  2 years ago +2

      Thanks. I'm glad you're liking the videos each week. I'm enjoying making them.

  • @ashwith
    @ashwith 2 years ago +3

    Have you considered offering a full-fledged communication systems MOOC on edX or Coursera, etc.? I know of only one on communication systems (on edX) so far, and I don't think it has been offered again for a long time. You're a really good teacher!

    • @iain_explains
      @iain_explains  2 years ago +2

      Thanks for the suggestion. It's occurred to me, but I haven't looked into it. I might give it some thought. I'm glad you like the videos.

  • @ghassankh4016
    @ghassankh4016 1 year ago

    Hello, I am Muhammad.
    I am studying for a master's degree in communications engineering, specializing in radio and mobile communications systems. Can you suggest some topics for my master's thesis?
    With best wishes

  • @fahad_hassan_92
    @fahad_hassan_92 1 year ago +1

    Will the sum of residuals for a best fit line for the given data equal zero?

    • @iain_explains
      @iain_explains  1 year ago +1

      Great question! I've never actually thought about that before, but I'm sure it's true. I can't think of how to prove it right now, but I tried running some examples in Matlab, and they all give a sum of residuals on the order of 10^(-14), so that's as close to zero as numerical accuracy allows, as far as I'm concerned. (A quick numerical check along these lines is sketched after this thread.)

    • @fahad_hassan_92
      @fahad_hassan_92 1 year ago +1

      @@iain_explains I see, thanks!
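
A quick numerical check of the exchange above (a minimal sketch with made-up data, not the Matlab runs mentioned in the reply): when the fitted model includes an intercept (all-ones) column, the residual vector of the least squares fit is orthogonal to every column of the design matrix, including the all-ones column, so the residuals sum to zero up to numerical precision.

```python
import numpy as np

# Made-up data: a noisy straight line
rng = np.random.default_rng(0)
d = np.linspace(0, 10, 50)                    # measurement locations
y = 2.0 * d + 1.0 + rng.normal(0, 0.5, 50)    # noisy observations

# Design matrix with an intercept column, as in the line-fitting example
D = np.column_stack([np.ones_like(d), d])

# Least squares fit: beta_hat minimises || y - D beta ||^2
beta_hat, *_ = np.linalg.lstsq(D, y, rcond=None)

residuals = y - D @ beta_hat
print(beta_hat)          # estimated [intercept, slope]
print(residuals.sum())   # around 1e-14, i.e. zero to numerical precision
```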

  • @eduardoortega399
    @eduardoortega399 2 years ago +1

    Thank you for the Least Squares Estimation explanation in this video. I need to work with Adaptive Digital Beamforming systems and this video will help me a lot! Could I also suggest a video about IQ modulation/demodulation? Thanks!

    • @iain_explains
      @iain_explains  2 years ago

      Thanks for the suggestion. I've got it on my "to do" list. In the meantime you might like to watch these videos that are on the topic: "What is a Constellation Diagram?" ua-cam.com/video/kfJeL4LQ43s/v-deo.html and "Is the Imaginary Part of QAM Real?" ua-cam.com/video/6asDtzaVjbQ/v-deo.html

  • @pratiksharma5663
    @pratiksharma5663 28 days ago

    Hey, I see that some weighted least squares problems are solved through iterations.
    Could you please explain under what conditions we need to solve them iteratively?

    • @iain_explains
      @iain_explains  28 days ago

      There are lots of different versions and related algorithms that involve iterations, so there's not one single answer to your question. However, one important aspect is that it is computationally expensive to calculate inverses of large matrices (e.g. (H^TH)^(-1) in this case), so sometimes it is a good idea to formulate the problem as "repeated smaller experiments", each with fewer measurements (a shorter "y" vector), and then iterate. (A minimal sketch of one such iterative formulation is given after this thread.)

    • @pratiksharma5663
      @pratiksharma5663 24 days ago +1

      Thank you
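
To illustrate the "repeated smaller experiments" idea in the reply above, here is a minimal recursive least squares sketch (an assumed illustration of one common iterative formulation, not code from the video): measurements are processed one at a time, and the estimate is updated without ever inverting a large H^T H matrix.

```python
import numpy as np

def rls_update(x, P, h, y):
    """One recursive least squares update for a single scalar measurement y = h^T x + noise."""
    Ph = P @ h
    k = Ph / (1.0 + h @ Ph)      # gain vector (scalar denominator, so no matrix inverse)
    x = x + k * (y - h @ x)      # correct the estimate with the new measurement
    P = P - np.outer(k, Ph)      # update the (scaled) error covariance
    return x, P

# Made-up example: estimate a 2-parameter model from a stream of measurements
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])
x_est = np.zeros(2)
P = 1e6 * np.eye(2)              # large initial uncertainty

for _ in range(200):
    h = rng.normal(size=2)
    y = h @ x_true + rng.normal(scale=0.1)
    x_est, P = rls_update(x_est, P, h, y)

print(x_est)   # close to x_true
```

Each update costs only vector operations, which is the appeal compared with recomputing (H^TH)^(-1) as the number of measurements grows.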

  • @pepsisherbert
    @pepsisherbert 2 years ago +1

    Hi Iain, just wanted to say that your teaching is exceptional - so calm, clear and concise. Thank you so much for the effort you put in, which I am sure is not insignificant. It is massively appreciated!

    • @iain_explains
      @iain_explains  2 years ago

      Thanks Betsy, that's so great to hear. I'm really glad you like the style and content of my videos.

  • @AtharvaSathaye
    @AtharvaSathaye 1 year ago

    Can you help me with a non-linear least squares estimator for trajectory estimation of a vehicle?

  • @wojciechzajaczkowski9080
    @wojciechzajaczkowski9080 5 months ago

    Is it possible to get rid of the noise if the number of equations equals the number of variables? It seems analogous to the situation where we have just two points and want to fit a function through them - there is no chance to eliminate the residual.

    • @iain_explains
      @iain_explains  5 months ago +1

      It's not getting rid of the noise, it's estimating the parameters in the presence of noise.

    • @wojciechzajaczkowski9080
      @wojciechzajaczkowski9080 5 months ago

      Of course it's just estimation, so the noise is always there, but I wonder whether this method is able to minimize the noise if H is a square MxM matrix?

  • @samuelespadoni8874
    @samuelespadoni8874 7 months ago

    Love you! You cleared up tons of doubts. What you're explaining is closely related to my university course on Satellite Navigation.

  • @stellatauer761
    @stellatauer761 2 years ago

    Why does one always take the square? Why can't we just minimize the simpler term arg min |y - Hx| ?

    • @iain_explains
      @iain_explains  2 years ago

      It's easier to calculate the derivative of the square function to find the optimal point, compared to the modulus function. (See the short derivation sketched after this thread.)
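
For reference, the reply above can be made concrete with the derivation from the video: squaring gives a quadratic in x whose derivative is linear in x, so setting it to zero yields a closed-form solution, whereas the modulus has no such convenient derivative.

```latex
\hat{x} = \arg\min_{x} \|y - Hx\|^2
        = \arg\min_{x} \left( y^T y - 2 y^T H x + x^T H^T H x \right)

% Setting the derivative with respect to x to zero:
-2 H^T y + 2 H^T H x = 0
\quad\Longrightarrow\quad
\hat{x} = \left( H^T H \right)^{-1} H^T y
```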

  • @pitmaler4439
    @pitmaler4439 2 years ago

    In the LMS algorithm, is the model always a straight line (y = beta1*x + beta2), or can it also be another curve (e.g. a parabola)?
    Thank you.

    • @iain_explains
      @iain_explains  2 years ago +1

      You can estimate higher-order lines/curves, yes. For example, considering the model on the top right-hand side of the page, you could add a column to the D matrix with elements d_i^2, and extend the beta vector by one (to include beta_2). I've already included the code for this in the accompanying file. See www.iaincollings.com/probability-and-random-variables#h.5vsqt9fvre40 (under the heading "Estimation and Hypothesis Testing"). (A small illustrative sketch of this kind of polynomial fit is given after this thread.)
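
Along the lines of the reply above, a minimal sketch (with made-up data, not the accompanying file from the website) of fitting a parabola by adding a d_i^2 column to the design matrix:

```python
import numpy as np

# Made-up data: a noisy parabola y = 0.5*d^2 - 1.0*d + 2.0
rng = np.random.default_rng(2)
d = np.linspace(-3, 3, 40)
y = 0.5 * d**2 - 1.0 * d + 2.0 + rng.normal(0, 0.3, d.size)

# Design matrix with columns [1, d, d^2], so beta = [beta_0, beta_1, beta_2]
D = np.column_stack([np.ones_like(d), d, d**2])

# Solve the normal equations (D^T D) beta = D^T y
beta_hat = np.linalg.solve(D.T @ D, D.T @ y)

print(beta_hat)   # approximately [2.0, -1.0, 0.5]
```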

  • @jasminnadic2103
    @jasminnadic2103 2 years ago

    Your videos are brilliant, thank you. In this example, it is clear what the data measurements could be (e.g. temperature). But with respect to a MIMO system where you want to adjust the antenna elements, what is the measurement there? Perhaps you can take the measurement from the received signal and then calculate the optimal weights in the last step. Perhaps those calculated weights could be the measurements.

    • @iain_explains
      @iain_explains  2 years ago

      It all depends on the equation you're dealing with. Often in communication systems there is a training period, where training data is sent which is known/expected at the receiver. Then the input is known, and the received signal is known (measured), and the unknown variables are the channel path gains to each antenna element. See: "Channel Estimation for Mobile Communications" ua-cam.com/video/ZsLh01nlRzY/v-deo.html

  • @onionknight1914
    @onionknight1914 2 years ago

    Thanks! My course design is on adaptive algorithms for beamforming, and this video may help.

    • @iain_explains
      @iain_explains  2 years ago

      Great. I hope it helped. Let me know if there are other specific topics you'd like me to cover.

  • @ethancooper4154
    @ethancooper4154 1 year ago

    Pulled this up as I'm learning about adaptive filters. It's almost a weekly occurrence: learning a new topic, struggling to wrap my head around it, wondering "Hmm, has Dr. Collings covered this?" and finding the answer is yes, of course he has.

    • @iain_explains
      @iain_explains  1 year ago +1

      That's great to hear. I'm so glad you're finding the videos helpful.

    • @iain_explains
      @iain_explains  1 year ago

      These videos might also be helpful for your current topic:
      - "What is an Adaptive Step Size in Parameter Estimation?" ua-cam.com/video/Nwm1cngRta8/v-deo.html
      - "What is the Kalman Filter?" ua-cam.com/video/OiUS2926nQM/v-deo.html
      - "How does a Radar Track Manoeuvring Targets?" ua-cam.com/video/ibvlKTGQ4zQ/v-deo.html

  • @ladegaardmads
    @ladegaardmads 2 years ago

    Nice video, I have one question. What does the "arg" stand for in the formula x^ = arg min Σ error^2 ?

    • @iain_explains
      @iain_explains  2 years ago +1

      It means the argument (i.e. the variable value) that minimises the function. In other words, it's not the minimum value of the function, but the value of x for which the function is a minimum.

    • @ladegaardmads
      @ladegaardmads 2 years ago +1

      @@iain_explains Thanks, that made it very clear to me :)

  • @emadibnalyaman8073
    @emadibnalyaman8073 2 years ago

    Hi sir, can you please make a video explaining how EM waves work, how they travel, what the RF signal looks like, and how we can picture the process and its modulation? I really cannot picture the whole thing. Until now, I have been getting by without understanding the entire process between the sender and the receiver. Thank you very much! I really appreciate your work.

    • @iain_explains
      @iain_explains  2 years ago +1

      Thanks for the suggestion. I've added it to my "to do" list.

  • @DZW-sx1lq
    @DZW-sx1lq 1 year ago

    Very good video! Could you talk more about OTFS? That is very interesting.

    • @iain_explains
      @iain_explains  1 year ago

      I'm assuming you've seen my OTFS video already? ua-cam.com/video/MvK3zhPrGkk/v-deo.html
      It's ongoing research, but I'll add it to my "to do" list to do a follow up video at some point.

  • @KundanKumar-ul1sw
    @KundanKumar-ul1sw 1 year ago

    Thank you for the lecture, it helped me a lot to understand LS estimation.

  • @wenliu7875
    @wenliu7875 2 years ago

    I like your videos in general, but in this case it seems like you can get the same formula at the end without going through any of the calculations in the middle. x = y/H is the same as x = (tran(H)*y) / (tran(H)*H). What am I missing?

    • @iain_explains
      @iain_explains  2 years ago

      Your formula doesn't work when the matrix H is not invertible (for example, when H is not square), and it has numerical problems when H is "close" to not being invertible. In contrast, H'H is invertible whenever H has full column rank, which is the typical case when there are more measurements than unknowns.

    • @wenliu7875
      @wenliu7875 2 years ago

      @@iain_explains Thanks for answering my question. Is it correct that the equation -2y'H + 2x'H'H = 0 and x = y/H achieve the same goal, but the second one is not always solvable?

    • @iain_explains
      @iain_explains  2 years ago

      Well, first of all x = y/H doesn't make sense because H is a matrix. I guess you mean x = inv(H)y, and if so, then if you substitute that into the first equation you'll get 0 = 0, so yes, they achieve the same thing when H is invertible. (A small numerical comparison of inv(H) and the pseudo-inverse is sketched after this thread.)

    • @wenliu7875
      @wenliu7875 2 years ago +1

      @@iain_explains Thank you.
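
A small numerical illustration of this thread (a minimal sketch with made-up values): when H is tall (more measurements than unknowns) there is no inv(H) at all, but the least squares solution (H^TH)^(-1)H^Ty still exists and agrees with the Moore-Penrose pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(6, 2))          # tall matrix: 6 measurements, 2 unknowns (no inv(H))
x_true = np.array([1.0, -2.0])
y = H @ x_true + rng.normal(scale=0.05, size=6)

x_ls = np.linalg.solve(H.T @ H, H.T @ y)   # (H^T H)^{-1} H^T y
x_pinv = np.linalg.pinv(H) @ y             # Moore-Penrose pseudo-inverse gives the same answer

print(x_ls)
print(x_pinv)                              # both close to x_true
```

When H happens to be square and invertible, both expressions reduce to inv(H)y, which is the point of the 0 = 0 substitution above.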

  • @biebz4lyfe4eva
    @biebz4lyfe4eva 2 years ago

    Can’t believe I get this awesome teacher for free (with ads)!

  • @disissartaj
    @disissartaj 1 year ago

    Thanks for the derivation of pseudo inverse.

  • @aghilvinayak7458
    @aghilvinayak7458 2 years ago

    Thanks Iain for this detailed explanation.

  • @Amine1z
    @Amine1z 2 years ago

    The Moore-Penrose pseudo-inverse.

  • @roiles1
    @roiles1 2 years ago

    You are Amazing!

    • @iain_explains
      @iain_explains  2 years ago

      Thanks so much for your nice comment. I'm glad you like the videos.