Gaussian Processes: Data Science Concepts

  • Published 27 Dec 2024

COMMENTS • 41

  • @asjsingh
    @asjsingh 6 months ago +19

    My word! You are a fantastic communicator.

    • @ritvikmath
      @ritvikmath  6 months ago +2

      Really appreciate that!

  • @VictorAlmeida27
    @VictorAlmeida27 3 months ago +1

    Very good explanation; you made it a lot simpler than my teachers ever could. My undergraduate thesis was on Gaussian processes, so it was pretty nostalgic seeing you dive into this topic.
    A note I'd like to make: the choice of the μ prior is very important depending on the spacing and number of data points you have; your model may be very dependent on it. It makes sense to set it to zero to develop the intuition, but when you try to apply it, you'll see that the model may just tend to zero if your data points are too far apart, if you make a poor choice of L, or if you don't have enough data (see the sketch below).
    So, to get the orange dashed line in the video, you'd also need to run a regression on the data to get your μ prior. The trade-off is that you are adding more uncertainty to the model, since you are assuming that the mean of your distribution lies on that linear relation.
    But as you said, it's great to have a distribution estimate and not rely only on point-estimate models; this is a great alternative.
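
    A minimal numpy sketch of that point about the prior mean, under the assumption of an RBF kernel; the helper names (`rbf_kernel`, `gp_posterior_mean`) and all data values are made up for illustration, not taken from the video:

    ```python
    import numpy as np

    def rbf_kernel(x1, x2, sigma=1.0, length=5.0):
        # Squared-exponential (RBF) kernel: covariance decays with squared distance.
        return sigma**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

    def gp_posterior_mean(x_train, y_train, x_test, mean_fn, noise=1e-6, **kern):
        # Standard GP conditioning: prior mean + K_*^T K^{-1} (y - prior mean at the training inputs).
        K = rbf_kernel(x_train, x_train, **kern) + noise * np.eye(len(x_train))
        K_star = rbf_kernel(x_train, x_test, **kern)
        return mean_fn(x_test) + K_star.T @ np.linalg.solve(K, y_train - mean_fn(x_train))

    # Made-up, widely spaced observations.
    x_train = np.array([0.0, 50.0, 100.0])
    y_train = np.array([10.0, 14.0, 18.0])
    x_test  = np.array([25.0, 75.0, 200.0])

    zero_mean   = lambda x: np.zeros_like(x)
    linear_mean = lambda x: 10.0 + 0.08 * x   # e.g. a simple regression line used as the prior mean

    print(gp_posterior_mean(x_train, y_train, x_test, zero_mean,   length=5.0))
    # With a short length scale, points far from the data revert to the prior mean of 0.
    print(gp_posterior_mean(x_train, y_train, x_test, linear_mean, length=5.0))
    # With a linear prior mean, the same points revert to the regression line instead.
    ```

    With a length scale that is long relative to the spacing of the data, the two prior-mean choices agree much more closely, which is the commenter's point about the interaction between L and data density.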

  • @vzinko
    @vzinko 6 months ago +3

    Happy to see you back here making great videos as always!

  • @dorothyduan9007
    @dorothyduan9007 1 day ago

    Love it! Makes so much more sense to me now 😊

  • @hesterklomp5266
    @hesterklomp5266 6 months ago +2

    This video couldn't come at a better time, I have a statistical learning exam next week. Thank you so much!!!

  • @undertaker7523
    @undertaker7523 6 months ago +9

    I remember asking for this a while back. Thank you!!!

  • @ghifariadamfaza3964
    @ghifariadamfaza3964 6 months ago +5

    Glad you cover this topic!

    • @ritvikmath
      @ritvikmath  6 months ago

      Hope you enjoyed it!

  • @paull923
    @paull923 6 months ago +3

    Thank you very much, you make even the hardest topics understandable and fun to watch! Could you delve a little deeper into the mathematical steps of marginalization and conditional probability that you talk about between 15:00 and 18:00?
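
    As a pointer for the question above: those steps rest on the standard marginalization and conditioning identities for a jointly Gaussian vector. A sketch in generic notation (the symbols f, f_*, K, K_*, K_** follow the usual GP textbook convention, not necessarily the video's exact labels):

    ```latex
    \begin{aligned}
    \begin{bmatrix} \mathbf{f} \\ \mathbf{f}_* \end{bmatrix}
    &\sim \mathcal{N}\!\left(
    \begin{bmatrix} \boldsymbol{\mu} \\ \boldsymbol{\mu}_* \end{bmatrix},
    \begin{bmatrix} K & K_* \\ K_*^\top & K_{**} \end{bmatrix}
    \right) \\[4pt]
    \text{marginalization:}\quad \mathbf{f} &\sim \mathcal{N}(\boldsymbol{\mu},\, K) \\[4pt]
    \text{conditioning:}\quad \mathbf{f}_* \mid \mathbf{f} &\sim \mathcal{N}\!\left(
    \boldsymbol{\mu}_* + K_*^\top K^{-1}(\mathbf{f} - \boldsymbol{\mu}),\;
    K_{**} - K_*^\top K^{-1} K_* \right)
    \end{aligned}
    ```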

  • @manumaminta6131
    @manumaminta6131 3 months ago

    This is elegantly explained!

  • @GeoffryGifari
    @GeoffryGifari 6 months ago +1

    If we try to predict the mean for the unknown points in between the data we have, would the mean always follow a straight line (ex: 0:45 one straight line, 24:05 two lines between 3 data points)?

    • @ritvikmath
      @ritvikmath  6 months ago +1

      Definitely not! That’s a great question; I drew the straight lines out of simplicity, and if you work out the math, the straight line would imply a mean of 13.75 for x=30, but as we see on the second page we actually got a mean of 13.9 there. The shape of the mean curve will likely be nonlinear and will depend on the kernel that you choose.

    • @GeoffryGifari
      @GeoffryGifari 6 months ago +1

      @@ritvikmath Ahh, I see. So I can get something like a polynomial interpolation of μ'(x) if I pick the right kernel?
      Thinking about it, a straight line for the mean makes sense if our known data vector is the only thing that matters, but to get something "curvier" it makes sense that the distribution at one point is affected by the points nearby (see the sketch below).
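
      To see the kernel-dependence concretely, here is a minimal numpy sketch with made-up training points (the helper `rbf_kernel`, the data, and the length scale are all hypothetical, not taken from the video):

      ```python
      import numpy as np

      def rbf_kernel(a, b, sigma=1.0, length=10.0):
          # Squared-exponential kernel: nearby x values get high covariance.
          return sigma**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

      # Hypothetical training points (zero prior mean for simplicity).
      x_train = np.array([0.0, 20.0, 40.0])
      y_train = np.array([12.0, 15.0, 13.0])

      # Dense grid of test points between the observations.
      x_grid = np.linspace(0.0, 40.0, 201)

      K = rbf_kernel(x_train, x_train) + 1e-8 * np.eye(3)
      K_star = rbf_kernel(x_train, x_grid)
      mu_post = K_star.T @ np.linalg.solve(K, y_train)   # GP posterior mean on the grid

      # Straight-line interpolation between the same points, for comparison.
      mu_linear = np.interp(x_grid, x_train, y_train)

      print(np.max(np.abs(mu_post - mu_linear)))  # nonzero: the GP mean curve is not piecewise linear
      ```

      Shrinking or growing the length scale (or swapping in a different kernel) reshapes that mean curve, which is the kernel-dependence mentioned in the reply above.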

  • @ireoluwaTH
    @ireoluwaTH 6 months ago

    Definitely interested in the math!
    Thank you for another remarkable exposition, Ritvik...

  • @vaibhavgupta2471
    @vaibhavgupta2471 6 months ago +1

    Great explanation, really appreciate your effort in explaining this.

    • @ritvikmath
      @ritvikmath  6 months ago

      Glad it was helpful!

  • @MecchaKakkoi
    @MecchaKakkoi 6 months ago +2

    Thanks for the great explanation!

  • @meanreversion1083
    @meanreversion1083 6 months ago

    Thank you for the video, it was nicely explained. There are a lot of simplifications, though. Could you also talk about how best to select sigma and l - is it all done empirically? Also, do you have any example implementation?

  • @imtryinghere1
    @imtryinghere1 1 month ago

    "Thanks for teaching me Gaussian processes - you're mean-t to be my tutor! And trust me, that's no variance from the truth." ChatGPT StatsDad Joke.

  • @astrophage381
    @astrophage381 6 months ago

    I'm a simple man. When Ritvik posts, I watch.

  • @marcfruchtman9473
    @marcfruchtman9473 6 months ago +3

    Thanks for this explanation. Ah, now if I can just convince the fish to swim in a normal distribution when I fish...

  • @annamalaisriram7256
    @annamalaisriram7256 6 months ago

    Well explained, and a timely need.

    • @ritvikmath
      @ritvikmath  6 months ago

      Thanks, hope it helped!

  • @SarthakGupta-b1x
    @SarthakGupta-b1x 21 days ago

    Sorry, I am new to this topic and the math behind it. If this is a covariance matrix, then shouldn't higher values mean greater variance --> less correlation? But in this case higher values mean more correlation for points closer together. I am confused as to why this is the case.
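
    One way to see what is going on in the comment above: the matrix built from the kernel is a covariance matrix whose diagonal holds the (constant) variances, so a larger off-diagonal entry means a larger correlation, not a larger variance. A small illustrative sketch (the kernel parameters and points are arbitrary):

    ```python
    import numpy as np

    def rbf_kernel(a, b, sigma=1.0, length=1.0):
        # Squared-exponential kernel: covariance shrinks as points get farther apart.
        return sigma**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

    x = np.array([0.0, 0.5, 3.0])
    K = rbf_kernel(x, x)                       # covariance matrix
    d = np.sqrt(np.diag(K))                    # standard deviations (all equal to sigma here)
    corr = K / np.outer(d, d)                  # correlation matrix: cov_ij / (sd_i * sd_j)

    print(np.round(K, 3))     # diagonal is sigma^2 = 1 everywhere; off-diagonals shrink with distance
    print(np.round(corr, 3))  # close points (0 and 0.5) have correlation ~0.88; far points ~0
    ```

    Close points end up highly correlated and distant points nearly uncorrelated, even though every point has the same variance.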

  • @davidheilbron
    @davidheilbron 6 months ago

    Thank you so much

  • @bin4ry_d3struct0r
    @bin4ry_d3struct0r 6 months ago

    I only took Intro to Stats, so I never learned about kernels or even conditional distributions. Nonetheless, this is very interesting!

  • @JonathanFraser-i7h
    @JonathanFraser-i7h 6 months ago +7

    Modelling an integer quantity which must be greater than zero and choosing Gaussian over Poisson... tsk tsk.

    • @ritvikmath
      @ritvikmath  6 months ago

      Haha fair point! That’s what I get for trying to use a too-simple example 😆

    • @JonathanFraser-i7h
      @JonathanFraser-i7h 6 months ago

      @@ritvikmath Oh of course, but it wouldn't be YouTube without the trolls. I felt I needed to truly be part of the community.

    • @johningham1880
      @johningham1880 6 months ago +6

      Poisson is the perfect distribution for fish

    • @JonathanFraser-i7h
      @JonathanFraser-i7h 6 months ago

      @@johningham1880 sounds like something so crazy only a Frenchman would say it.

  • @SleekGreek
    @SleekGreek 6 months ago +2

    I love you.

  • @SlovakiaPanda
    @SlovakiaPanda 17 days ago

    You are good, but you should just stick to the whiteboard version.

  • @MrEo89
    @MrEo89 6 months ago

    Man, your channel would blow up spectacularly if you invested the time in learning how to make really nice visuals. The whole poorly hand-drawn example thing is really 2005 and screams laziness and/or amateurism.