Machine Learning 3.2 - Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA)

  • Published 5 Oct 2024

COMMENTS • 29

  • @lizzy1138 · 3 years ago · +4

    Thanks for this! I needed to clarify these methods in particular; I was reading about them in ISLR.

  • @Spiegeldondi · 1 year ago · +1

    A very good and concise explanation, even starting with the explanation of likelihood. Very well done!

  • @JappieYow · 3 years ago · +7

    Interesting and clear explanation! Thank you very much, this will help me in writing my thesis!

  • @ofal4535 · 2 years ago · +2

    I was trying to read it myself, but you made it so much simpler.

  • @gingerderidder8665 · 4 months ago

    This beats my MIT lecture. Will be coming back for more!

  • @Sam1998Here · 2 months ago

    Thank you for your explanation. I also think at 8:15 the multivariate normal distribution's probability density function should have $\sqrt{|\Sigma|}$ in the denominator (rather than $|\Sigma|$ as you have currently), and it may also be helpful to let viewers know that $p$ represents the dimension of the space being considered.
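
    For reference, the corrected density described above is the standard multivariate normal pdf in $p$ dimensions:

    $$
    f(\mathbf{x}) = \frac{1}{(2\pi)^{p/2}\,|\Sigma|^{1/2}}\exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right)
    $$

    where $|\Sigma|^{1/2} = \sqrt{|\Sigma|}$ is the square root of the determinant of the covariance matrix and $p$ is the dimension of the feature space, consistent with the correction above.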

  • @neftalisalazar2352 · 7 months ago · +1

    I enjoyed watching your video, thank you. I will watch more of your machine learning videos. Thank you!

  • @Dhdhhhjjjssuxhe · 1 year ago · +2

    Good job. It is very easy to follow and understand

  • @huilinchang8027 · 4 years ago · +8

    Awesome lecture, thank you professor!

  • @spencerantoniomarlen-starr3069

    10:48 Ohhhhh, I was just going back and forth between the sections on LDA and QDA in three different textbooks (An Introduction to Statistical Learning, Applied Predictive Analytics, and Elements of Statistical Learning) for well over an hour, and that multivariate normal pdf was really throwing me off, mostly because of the capital sigma raised to the negative first power. I didn't realize it was literally a capital sigma; I kept thinking it was a summation of something!
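
    To make the notation concrete, here is a minimal Python sketch (with made-up numbers) showing that the capital $\Sigma$ in the pdf is the covariance matrix and $\Sigma^{-1}$ is its matrix inverse, not a summation:

```python
# Minimal sketch with made-up numbers: capital Sigma below is a covariance
# matrix, and Sigma^{-1} in the pdf's exponent is its matrix inverse.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 0.0])                      # class mean (assumed)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                 # covariance matrix (assumed)
x = np.array([1.0, -0.5])                      # query point (assumed)

# Evaluate the pdf directly from the formula.
p = len(mu)                                    # dimension of the space
diff = x - mu
quad = diff @ np.linalg.inv(Sigma) @ diff      # (x - mu)^T Sigma^{-1} (x - mu)
pdf_manual = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** p * np.linalg.det(Sigma))

# scipy computes the same density, confirming the reading of the notation.
pdf_scipy = multivariate_normal.pdf(x, mean=mu, cov=Sigma)
assert np.isclose(pdf_manual, pdf_scipy)
print(pdf_manual)
```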

  • @vi5hnupradeep · 3 years ago · +4

    Thank you so much! Cleared a lot of my doubts.

  • @geo123473 · 11 months ago · +1

    Great video! Thank you, professor!! :)

  • @黃楷翔-h8j · 2 years ago · +2

    Very useful information, thank you, professor!

    • @billbasener8784 · 2 years ago

      I am glad it's helpful! Thanks for the kind words.

  • @zhengcao6529 · 3 years ago · +1

    You are so great. Please keep it up.

  • @MrRynRules · 3 years ago · +1

    Thank you sir, well explained.

  • @jaafarelouakhchachi6170 · 6 months ago · +1

    Can you share the slides in the videos with me?

  • @pol4624 · 3 years ago

    Very good video, thank you, professor!

    • @billbasener8784 · 3 years ago

      I am glad it is helpful. Thank you for the kind words!

  • @kaym2332 · 3 years ago · +1

    Hi! If the classes are assumed to be normally distributed, does that imply that the features making up an observation are normally distributed as well?

      • @billbasener8784 · 3 years ago · +1

      Yes. If each class has a multivariate normal distribution, then each individual feature variable has a univariate normal distribution.
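
      In symbols, the standard marginalization property behind this reply: if $X \sim \mathcal{N}_p(\boldsymbol{\mu}, \Sigma)$, then each coordinate is univariate normal,

      $$
      X_i \sim \mathcal{N}(\mu_i, \Sigma_{ii}) \quad \text{for } i = 1, \dots, p.
      $$

      Note the converse does not hold in general: normal marginals alone do not guarantee the vector is jointly multivariate normal.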

  • @saunokchakrabarty8384 · 1 year ago

    How do you get the values of 0.15 and 0.02? I'm getting different values.

      • @rmharp · 1 year ago

      Agreed. I got approximately 0.18 and 0.003, respectively.
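
      Without the exact parameters from the video it is hard to say which values are right, but here is a hypothetical sketch of how such posteriors are computed via Bayes' rule. The priors, means, and covariances below are invented for illustration, not the video's values, so the resulting numbers will differ:

```python
# Hypothetical QDA-style posterior computation via Bayes' rule.
# All parameters here are invented for illustration, NOT the video's values.
import numpy as np
from scipy.stats import multivariate_normal

x = np.array([1.0, 2.0])                                 # query point (assumed)
priors = np.array([0.5, 0.5])                            # class priors (assumed)
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]     # class means (assumed)
covs = [np.eye(2), 2.0 * np.eye(2)]                      # per-class covariances (QDA)

# Class-conditional likelihoods f_k(x)
likelihoods = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                        for m, c in zip(means, covs)])

# Posterior P(class k | x) = pi_k * f_k(x) / sum_j pi_j * f_j(x)
posteriors = priors * likelihoods / np.sum(priors * likelihoods)
print(posteriors)  # entries sum to 1
```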

  • @haitaoxu3468 · 3 years ago

    Could you share the slides?
