Thank you so much for this tutorial! It's the only resource I have found so far with an intuitive example and explanation of BQ. I have a question regarding the variance of Z in BQ. There are three x's being considered when estimating the variance: X is the training data, and then there are x and x' as well. If k(x,x') is a kernel function for any two given inputs, how can one picture k(x,X) and k(X,x')? Thank you again!
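One way to picture these terms (a minimal sketch, with a squared-exponential kernel and made-up points): k(x,X) is just the kernel evaluated between the single point x and every training point in X, so it is a vector of similarities rather than a single scalar, and k(X,x') is the analogous column vector.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    """Squared-exponential kernel evaluated between every pair of points."""
    a = np.atleast_1d(a)[:, None]   # column: one row per point in a
    b = np.atleast_1d(b)[None, :]   # row: one column per point in b
    return np.exp(-0.5 * (a - b) ** 2 / lengthscale ** 2)

X = np.array([0.0, 1.0, 2.0])    # training data (three points, made up)
x = 0.5                          # a single query point

print(rbf_kernel(x, x).shape)    # (1, 1): a single scalar entry, like k(x, x')
print(rbf_kernel(x, X).shape)    # (1, 3): the row vector k(x, X)
print(rbf_kernel(X, x).shape)    # (3, 1): the column vector k(X, x')
print(rbf_kernel(X, X).shape)    # (3, 3): the Gram matrix K
```

So k(x,X) measures how similar the query point is to each training point, and k(x,X) K^{-1} k(X,x') is the usual GP posterior-covariance correction built from those vectors.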
Hi, I'm also a little bit confused by the fact that there are two inputs (x, x'), which are written separately here as k(x,X) and k(X,x'). I was thinking that maybe another formulation, written as $V = E_{x}k(x, x) - E_{x}k(x,X)K^{-1}k(x,X)^T$, could be a little clearer, where the two inputs x and x' are both folded into the single input x. What do you think? And the diagonal part of $E_{x}k(x, x)$ should be exactly equal to the original k(x, x), is this understanding correct?
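For what it's worth, here is a minimal Monte Carlo sketch of the standard BQ variance $V = E_{x,x'}[k(x,x')] - E_x[k(x,X)]\,K^{-1}\,E_{x'}[k(X,x')]$ (the training inputs, kernel, and the choice $p(x) = \mathcal{N}(0,1)$ are all made up for illustration). Because x and x' are integrated out independently, the two outer expectations collapse to the same vector, which is why the second term can be written with a single k(x,X):

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, ls=1.0):
    """RBF kernel matrix between two 1-D point sets."""
    a, b = np.asarray(a), np.asarray(b)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = np.array([-2.0, 2.0])                # training inputs (hypothetical)
K = k(X, X) + 1e-9 * np.eye(len(X))      # jitter for numerical stability

# x and x' are each integrated out under p = N(0, 1), so we can estimate
# E_{x,x'}[k(x, x')] with two *independent* Monte Carlo sample sets.
xs  = rng.standard_normal(2000)
xsp = rng.standard_normal(2000)

z     = k(rng.standard_normal(100_000), X).mean(axis=0)  # E_x[k(x, X)]
first = k(xs, xsp).mean()                                # E_{x,x'}[k(x, x')]

V = first - z @ np.linalg.solve(K, z)
print(V)  # posterior variance of Z; positive, roughly 0.44 in this setup
```

Note the first term is a double expectation over both x and x', not the diagonal $E_x k(x,x)$: for this stationary kernel the diagonal would just be a constant.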
Don't you think we could improve the performance of the trapezoidal method by using the boosting concept? The trapezoidal rule is a weak estimator, and boosting weak estimators might perform as well as Simpson's method.
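There is actually a classical result in this direction: Richardson extrapolation combines two trapezoidal estimates (at step h and h/2), and the combination is exactly the composite Simpson rule. A quick numerical check (the integrand and panel counts here are just illustrative):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n panels on [a, b]."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

a, b = 0.0, np.pi            # integral of sin over [0, pi] is exactly 2
t4 = trapezoid(np.sin, a, b, 4)
t8 = trapezoid(np.sin, a, b, 8)

# Richardson extrapolation: this weighted combination of two trapezoidal
# estimates reproduces the composite Simpson rule exactly.
combined = (4 * t8 - t4) / 3

print(abs(t8 - 2), abs(combined - 2))  # the combined estimate is far more accurate
```

Iterating the same idea with ever-finer trapezoidal estimates is Romberg integration, which is perhaps the closest classical analogue of "combining weak estimators" here.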
Finally a great tutorial on Bayesian Quadrature. Thank you..!!
Thank you so much for the tutorial!
elite content
Shifting and rotating the nodes a little bit would create other weak estimators.
Yeah, Gaussian quadrature is exactly the boosting concept.
hahaha, generally, lazy is goooooood.