Probability for Machine Learning!

  • Published Oct 16, 2024

COMMENTS • 19

  • @NicholasRenotte 1 year ago +2

    I just binge-watched the whole Probability Theory playlist this morning! Smashed it Ajay!

    • @CodeEmporium 1 year ago +2

      Bwahaha thanks Nick !

    • @NicholasRenotte 1 year ago

      @CodeEmporium anytime man, you’re so fluid with your explanations. Inspiring as hell!!

  • @devharal6541 1 year ago

    Your videos are so accurate, and the intuition you build by connecting concepts to machine learning is just awesome.

  • @chinmayeejoshi2119 2 years ago +1

    This learning series has been excellent. Thank you!

    • @CodeEmporium 2 years ago

      Thanks so much for watching 🎉 :)

  • @CodeEmporium 2 years ago +1

    Please check out the accompanying blog post in the description below. For more information on each topic discussed in the video (random variables, probability mass / density functions), please refer to the "Probability Theory for Machine Learning" playlist.
    Video Correction #1: Prices are dependent random variables that depend on the number of bedrooms, age, and square footage. So from 14:06 onwards, the conditional distribution should also depend on the X_ij terms. That said, the overall derivation remains the same. Hope this helps!

  • @badriveera8941 7 months ago +2

    Great set of videos. One subtle point of clarification. If f_Y(y_i) is a probability density function, then the probability of any particular house price would be zero, since price is a continuous variable. How do you reconcile that? Appreciate your thoughts on this.
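A minimal sketch of the distinction this question touches on, using made-up price parameters (μ = 750, σ = 50, in $1000s): a density value f_Y(y_i) is not itself a probability — the probability of any exact price is zero, and probabilities come from integrating the density over an interval. Maximum likelihood works with density values (heights of the curve), which are nonzero.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x -- a height, not a probability."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma = 750, 50  # hypothetical house-price distribution, in $1000s

# P(X == 750) is exactly 0 for a continuous variable, but the density is not:
print(normal_pdf(750, mu, sigma))  # height of the curve at the mean, ~0.008

# Probabilities come from integrating the density over an interval:
p = normal_cdf(800, mu, sigma) - normal_cdf(700, mu, sigma)
print(p)  # P(700 <= X <= 800), ~0.683 here

# For a tiny interval, f(x) * dx approximates the probability of that interval:
dx = 0.001
approx = normal_pdf(750, mu, sigma) * dx
exact = normal_cdf(750 + dx / 2, mu, sigma) - normal_cdf(750 - dx / 2, mu, sigma)
```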

  • @carsten011640 1 year ago +1

    Hi, your videos in this series have been so useful for my understanding, thank you!
    Could I clarify something please? At 20:00, you say "in reality, all these PDFs can be assumed to be the same... practically meaning that the probability that house #1 is $700-800 is the same for house #2 too, and all other houses".
    I'm wondering whether this is correct. My understanding: the PDFs are the same shape for every X value (Gaussian), but they centre around a new mean for every X value too. House number 1's X values mean it will have a certain probability of being $700-800 according to the linear equation's y^ estimate at that X value, and it follows that house number 2's X values give it a different probability of being $700-800 according to the y^ estimate at that X value. Is this a correct interpretation?
    Again, thank you so much for this series.

    • @jaxejaxejaxe 10 months ago +1

      This assumption is a very high-level assumption, and we don't need to talk about any estimators to understand it. You should think of this assumption as coming before any math is done at all.
      Generally:
      We try to find the best hypothesis/ML-model/prediction-rule from our sample data that can predict well on new data. Therefore, we assume that our sample data has been "given to us" from some unknown distribution. We don't know this distribution, but we have to assume that all the data come from this _same_, unknown distribution.
      In this example, it means that every time a house price is "sampled", it comes from this unknown distribution putting out prices on the houses. This means that for any houses x and z, the probability of them costing any amount ($700-800k, for instance) is the same.
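A toy simulation of the identically-distributed assumption described in this reply, with an invented distribution (Gaussian, μ = 750, σ = 50, in $1000s): because every house price is drawn from one and the same distribution, the estimated probability of landing in $700-800k is the same (up to sampling noise) no matter which house is sampled.

```python
import random

random.seed(0)

MU, SIGMA = 750, 50  # assumed parameters of the unknown price distribution ($1000s)

def sample_price():
    """Every house price is drawn from the same (to us, unknown) distribution."""
    return random.gauss(MU, SIGMA)

def prob_700_to_800(n=100_000):
    """Monte Carlo estimate of P(700 <= price <= 800) under that distribution."""
    hits = sum(1 for _ in range(n) if 700 <= sample_price() <= 800)
    return hits / n

# "House 1" and "house 2" are sampled from the same distribution, so the
# estimated probability is the same for both, up to Monte Carlo noise:
print(prob_700_to_800())  # ~0.68 for these parameters
print(prob_700_to_800())  # ~0.68 again
```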

  • @keren718 1 year ago

    I love your series. I wonder why there is a (-1) × in the last derivation.

    • @CodeEmporium 1 year ago +1

      Thanks so much. The last term (if derived) becomes the squared loss, and this loss needs to be minimized. Minimizing it is the same as maximizing (-1) * the same value.
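A small numeric sketch of that equivalence, on made-up data: grid-searching a slope w shows that the argmin of the squared loss and the argmax of (-1) × the squared loss pick the same parameter.

```python
# Made-up data, roughly y = 2x:
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def squared_loss(w):
    """Sum of squared residuals for the model y = w * x."""
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys))

candidates = [round(0.1 * k, 1) for k in range(0, 41)]  # w in [0.0, 4.0]

w_min = min(candidates, key=squared_loss)                # argmin of the loss
w_max = max(candidates, key=lambda w: -squared_loss(w))  # argmax of (-1) * loss

assert w_min == w_max  # same parameter either way
print(w_min)  # 2.0 for this data
```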

  • @theforthdoctor7872 2 years ago

    You forgot to mention the "Bambleweeny 57 Sub-Meson Brain" and the "atomic vector plotter".😉

  • @mustafizurrahman5699 2 years ago

    Excellent

  • @badermuteb4552 2 years ago +1

    argmax same as max???

    • @CodeEmporium 2 years ago +5

      They are not. Max returns "what is the maximum value of this function?", while argmax returns "what value of the parameters makes the function reach that maximum?"
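A quick illustration of that difference, on a made-up function that peaks at w = 3:

```python
# max answers "what is the largest value of f?";
# argmax answers "which input achieves it?".
def f(w):
    return -(w - 3) ** 2 + 10  # peaks at w = 3 with value 10

ws = range(-10, 11)

max_value = max(f(w) for w in ws)  # max:    the value 10
arg_max = max(ws, key=f)           # argmax: the input 3

print(max_value, arg_max)  # 10 3
```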

    • @vtrandal 1 year ago

      @CodeEmporium You are very, very good with this subject matter. Thank you for making these great videos!