Machine Learning Lecture 26 "Gaussian Processes" -Cornell CS4780 SP17
- Published May 15, 2024
- Cornell class CS4780. (Online version: tinyurl.com/eCornellML )
GPyTorch GP implementation: gpytorch.ai/
Lecture Notes:
www.cs.cornell.edu/courses/cs4...
Small corrections:
Minute 14: it should be P(y,w|x,D) and not P(y|x,w,D) sorry about that typo.
Also, the variance term at 40:20 should be K** - K* K^-1 K*.
I got my first data science internship after watching all the lectures. And now I'm revisiting it during the quarantine and still benefiting a lot. This whole series is a legend, thank you so much, Professor Kilian! Stay safe and healthy!
Awesome! I am happy they are useful to you!
He is the most interesting ML professor that I have ever seen on the Internet.
That was a truly amazing lecture from an intuitive teaching perspective. I LOVE THE ENERGY!
The best GP lecture I've found. Simple enough and makes sense.
amazing lecture in every possible aspect: bright, funny, full of energy... a true inspiration!
There are no words to describe the power of the intelligence in the lecture , thanks a lot for sharing it.
Best prof that I've experienced so far. I love the way he tries to build sensible intuition behind the math. Also, love the sense of humour.
This is a great lecture, thanks for sharing it. I also appreciate that you took the time to add the lecture corrections.
Loved that "the answer will always be Gaussian, the whole lecture!" moment.
Hooray! Gaussian process for dummies! Exactly what I was looking for
Thank you very much.
People like these are truly a gift to our mankind!
Very intuitive and easy to follow. Loved it!
the best GP lecture ever, impressive work (Y)
Awesome explanation. That house example explains it in very layman's terms.
Explanation was great! Thanks a lot. It would be great if you uploaded the other courses you taught at Cornell, because not everyone is lucky enough to get a teacher like you :)
Thanks for such a good lecture and nice explanation. I was struggling to understand Gaussian processes for a while until I saw your video.
Currently writing my master thesis working with bayesian optimization, thank god I found this video!
Best lecture on GPs! Thanks.
8 hours of scraping the internet, but the 9th was the successful one. You, sir, have explained and answered all the questions I had on the subject, and raised much more interesting ones. Thank you very much!
Sir your lectures are really amazing, you give so many insights I would've never thought of. Thank you
This is a more intuitive explanation than the Sheffield summer school GP videos
Thanks Kilian, this was really insightful!
Professor Kilian, you are truly an amazing professor.
I love you. Thank you for explaining why you can model it as a Gaussian.
It might have only 112 likes & ~5000 views as I write this comment, but it will have a profound influence on the people who watch it & it will stick in their minds!
The comparison with house prices to explain the covariance was very pertinent. I've never heard it elsewhere. Thanks!
From one Kilian to another! :-)
Best lecture on Gaussian Processes
Amazing, I really love the teacher's energy.
AMAZING LECTURER
Woo this is Andrew Ng level explanations!! Thank you for making these videos. :)
Thanks for sharing the excellent lecture. @27:00 About the house's price: the contour plot was always drawn in the first quadrant, but the Gaussian contours should have extended over the entire plane. This actually is a drawback of the Gaussian: while we know that the house's price can't be negative, and we do not wish to consider the negative range in our model at all, we can't avoid it: the Gaussian allows non-zero probability for the negative price intervals as well.
exactly, I was so confused why he drew it as a peak rather than a ridge
Thank you so much for this clear lecture :D It helped me a lot!!
Thank you so much for the incredible lecture and for sharing the content on UA-cam! I'm a first year Master's student and this is really helping me self study a lot of the content I didn't learn in undergrad. I hope I can be a professor like this one day.
Insane lecture. This helped so much, thank you.
Such a wonderful lecture!
"What the bleep" HAHAH, it was genuinely interesting to look at regression from this perspective!
Thank you for an amazing lecture, sir!! :)
Really nice explanation.
Looooove this lecture!
amazing explanation!
The last demo was great for understanding gp
Really awesome teaching
Brilliant and interesting !
Kilian is an ML god. Why do crappy lectures get so many views while this gold playlist gets so few? Part of me hopes people don't find it, just to decrease the competition. But still, Kilian is a god, and this is a gold series. Please upload deep learning as well.
Your lecture is really, really good! I have a question: if the input also has noise, how can we use Bayesian linear regression? Most books mention Gaussian noise in the label, but I think it's also quite possible to have some noise in the input X.
What a teacher!!
Hypest GP lecture ever LOL
The professor's throat is unable to keep up with his excitement!
*Throat clearing sounds*
Thank you so much!
These lectures definitely have some problems... I have no idea why they are even more interesting than Netflix series lol
Before starting the lecture, I thought this must be an exaggeration, but after watching it, this is actually true!
Really insightful lecture series, and I have to say I gained a lot from it. An important correction from the beginning: sums and products of normal random variables are not always normal. The sum of two Gaussians is Gaussian only if they are independent or jointly normal. No such rule exists for products, as far as I remember.
Yes, that came out wrong. What I wanted to say is the product of two normal PDFs is proportional to a normal PDF (which is something that comes up a lot in Bayesian statistics).
@@kilianweinberger698 Thanks for replying. I am not sure I understand what you mean by proportional to a normal. The product of two normals is generally a combination of chi-square variables: XY = ((X+Y)^2 - (X-Y)^2)/4. Please correct me if I am missing something.
@@rorschach3005 I had the same reaction and I think I've resolved it. "product of Gaussians" can be interpreted two different ways. You and I considered the distribution of z where z=x*y and x and y are Gaussian. By this definition, z is definitely not Gaussian. KW is saying that if you define the pdf of z to be the product of the pdfs of x and y, normalized, then z is Gaussian. This is the property exploited in the motivating integral - note that probability densities are multiplied, but actual random variables are never multiplied.
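To make the distinction in this thread concrete, the pdf interpretation is easy to check numerically: the renormalized pointwise product of two Gaussian pdfs is again a Gaussian pdf, with precision-weighted mean and variance. A minimal sketch (the specific means and variances below are just illustrative choices, not from the lecture):

```python
import numpy as np

def gauss_pdf(x, mu, var):
    """Density of N(mu, var) evaluated at x."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Two example Gaussian densities.
mu1, var1 = 0.0, 1.0
mu2, var2 = 2.0, 0.5

# Pointwise product of the two pdfs, renormalized on a fine grid.
x = np.linspace(-10, 10, 200001)
prod = gauss_pdf(x, mu1, var1) * gauss_pdf(x, mu2, var2)
prod /= np.trapz(prod, x)  # normalize so it integrates to 1

# Closed-form parameters of the resulting Gaussian (precision-weighted).
var_post = 1.0 / (1.0 / var1 + 1.0 / var2)
mu_post = var_post * (mu1 / var1 + mu2 / var2)

# The renormalized product matches N(mu_post, var_post) pointwise.
assert np.allclose(prod, gauss_pdf(x, mu_post, var_post), atol=1e-6)
```

Note that this multiplies densities, not random variables: the distribution of the product z = x*y of two Gaussian variables is indeed not Gaussian, which is the other reading discussed above.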
It is worth pointing out that while there is no specific model, there is an analytic model being assumed. In this case he assumed a linear model.
Thanks for the brilliant lecture! One confusion, if I may: around 39:18 you change the conditional probability P(y1...yn | x1...xn) based on data D to P(y1...yn, y_test | x1...xn, x_test). My questions: 1) before the test data point arrives, do we already have a joint distribution P(y1...yn, x1...xn) based on D? 2) once the test point comes in, do we need to form another Gaussian distribution N(mean, variance) for (y1...yn, x1...xn, y_test, x_test)? If so, how do we get the covariance term between the test data point and each training point? So basically, for prediction with a new x_test, what exact parameters do we get for the y_test distribution (how do we get the mean and variance)? Many thanks!
18:08 I have a doubt: we are not constructing a single line; instead we are comparing with every possible line nearby. Does that mean we are indirectly accounting for w using the covariance matrix?
Thanks for a great lecture. I am a bit confused about the uncertainty estimates. How can we formally argue that the posterior variance at any point is telling us something really useful? For example, consider a simple setup where the training data is generated as y_i = f(x_i) + N(0, sigma^2), i = 1,...,n, and f is a sample path of GP(0, k). Then is it possible to construct a high-probability confidence band that traps the ground truth f using the posterior covariance and mean functions? After all, if I understood correctly, the main plus point of GP regression over kernel ridge regression is the posterior covariance.
I actually found all my questions answered in this paper: arxiv.org/pdf/0912.3995.pdf, which is the test-of-time paper at ICML 2020.
What was the question at 14:30 anyone know? Brilliant lecture - easily a new all time favourite.
Beautiful
Great lectures! Really enjoyable. There's an important mistake at 40:20, I think? The variance is not K** K^-1 K*, as Kilian wrote; rather, it is K** - K* K^-1 K*.
Yes, good catch! Thanks for pointing this out. Luckily it is correct in the notes: www.cs.cornell.edu/courses/cs4780/2018fa/lectures/lecturenote15.html
Which course should I take to learn data science or machine learning?
Professor Kilian, I don't understand how you got mean = K* K^-1 y and variance = K** - K* K^-1 K* for the normal distribution?
It is just the conditional distribution for the Gaussian ( see e.g. en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions , here Sigma is our K)
Thanks
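For anyone following this thread, the conditional-Gaussian formulas can be checked in a few lines. This is only a sketch, assuming an RBF kernel and made-up 1-D training data (the function names and values are illustrative, not from the lecture or the notes):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row vectors in A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Made-up 1-D training data and one test point.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.5])
X_test = np.array([[1.5]])

noise = 1e-6  # small jitter for numerical stability
K = rbf_kernel(X, X) + noise * np.eye(len(X))        # K   (train-train)
K_star = rbf_kernel(X_test, X)                       # K*  (test-train)
K_star_star = rbf_kernel(X_test, X_test)             # K** (test-test)

# Conditional mean and variance of the joint Gaussian:
mean = K_star @ np.linalg.solve(K, y)                       # K* K^-1 y
var = K_star_star - K_star @ np.linalg.solve(K, K_star.T)   # K** - K* K^-1 K*

# The subtracted term is what the correction above is about: without the
# minus sign the result is not a valid (non-negative) variance in general.
assert var[0, 0] >= 0.0
```

The two lines computing `mean` and `var` are exactly the conditional-distribution formulas from the Wikipedia link, with Sigma partitioned into K, K*, and K**.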
Really making concrete what I've known about ML for some time. There is no such thing as ML, it is all just glorified correlation :P
I think there is a typo at 40:18 for the variance.
Yes, I agree. The variance is not K** K^-1 K*, as Kilian wrote, but rather K** - K* K^-1 K*.
Since p(xi, yi; w) = p(yi|xi; w) p(xi), during MLE and MAP we ignore p(xi), as it is independent of w, to get the likelihood function (∏_i p(yi|xi; w)). But here, why do we simply start with P(D; w) equal to the likelihood function? Shouldn't P(D; w) equal ∏_i p(yi|xi; w) p(xi), where p(xi) is some arbitrary distribution (it is independent of w and no assumptions are made about it) while p(yi|xi; w) is Gaussian? Since only multiplying a Gaussian with a Gaussian gives a Gaussian, how is the answer Gaussian when p(xi) is not Gaussian?
Ignoring p(xi) during MLE and MAP makes a lot of sense as it is independent of w, but why wasn't it included when writing P(D; w) in the first place?
Do we just assume that, since the xi are given to us and we don't model p(xi), p(xi) is a constant for each xi? Can anyone help?
Also, thank you for the lectures Prof.
The trick is that P(D;w) is inside a maximization with respect to the parameters w. Because P(x_i) is independent of w, it is just a constant we can drop:
max_w P(D;w) = max_w ∏_i P(x_i, y_i; w) = max_w (∏_i P(y_i|x_i; w)) (∏_i P(x_i))
The last term is a multiplicative constant that you can pull out of the maximization and drop, as it won't affect your choice of w.
Why is integral over w of P(y|x,w)P(w|D) equal to P(y|x,D) ? Is it because P(w|D) = P(w|D,x)?
P(y|x,w)P(w|D)=P(y,w|x,D)
If you now integrate out w you obtain P(y|x,D).
(Here x is the test point, and D is the training data.)
If you want to make it clearer you can also use the following intermediate step: P(y|x,w)=P(y|x,w,D). You can condition on D here, because y is conditionally independent of D, when x,w are given.
For the same reason you can write P(w|D)=P(w|D,x) as w does not depend on the test point x (it is only fitted on the training data).
Hope this helps.
"One line of julia... two lines of python!!" whats with all the python hate professor? :P
Oh come on, two isn't so bad, do you know how many it is in assembly? :P
Sorry, but why a correlation of 10 for the POTUS example? Correlation can only be between -1 and 1?!
What exactly is K**? Isn't it always ones(m,m)?
No, it depends on the kernel function. But it is the inner product of the test point(s) with themselves.
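To make that reply concrete: K** is only all ones in special cases. A quick check, using an RBF and a linear kernel on two made-up test points (the names and values are just for illustration):

```python
import numpy as np

# Two hypothetical test points.
X_test = np.array([[0.0], [2.0]])

def rbf(A, B):
    """Squared-exponential kernel with unit lengthscale."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

# RBF: the diagonal of K** is all ones, but off-diagonals are not.
K_ss = rbf(X_test, X_test)
assert np.allclose(np.diag(K_ss), 1.0)  # k(x, x) = 1 for the RBF kernel
assert K_ss[0, 1] < 1.0                 # cross terms shrink with distance

# Linear kernel k(x, z) = x.z: not even the diagonal is all ones.
K_ss_lin = X_test @ X_test.T
assert not np.allclose(K_ss_lin, np.ones((2, 2)))
```

So ones(m,m) would only appear for kernels (and test points) where every pairwise kernel value happens to equal one.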
Hahaha, any student who can understand this lecture is a genius :D:D:D:D everything is mixed together :D:D so confusing.
Lots of ppl do actually
Thanks for the content but please calm down, I'm getting a heart attack
Learning GPs is so frustrating T.T
lol
Just learned that they used Gaussian processes to search for the airplanes in the ocean. Btw, I am from Malaysia.
aha moment: 11:15 to 11:25
the Donald Trump bits were very funny!
ERRM
This guy has absolutely no charisma and has a controlling attitude. His tone is not fluent.
Just binge watching your course, I love it... is there a link to the homework, exams, and solutions? It would be helpful.
Past 4780 exams are here: www.dropbox.com/s/zfr5w5bxxvizmnq/Kilian past Exams.zip?dl=0
Past 4780 Homeworks are here: www.dropbox.com/s/tbxnjzk5w67u0sp/Homeworks.zip?dl=0