Great videos, please continue exploring this interesting and important aspect of NNs
Excellent series! Thanks for all of your hard work on this, it's hugely helpful.
I'm not sure if this even makes sense, but would it be possible to build a model that handles aleatoric and epistemic uncertainty simultaneously? For instance, what if I combined an MLE model (estimating both mean and variance) with Monte Carlo dropout?
Same!!
Thank you!
Yes, absolutely. I usually do this as well, and then combine both to give an overall uncertainty score for a prediction. :)
There are also a couple of papers on this for example:
- AutoDEUQ
- A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement
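The combination described above can be sketched roughly like this: a network with mean and log-variance heads trained via Gaussian NLL (the aleatoric part), with dropout kept active at inference so that repeated forward passes give an epistemic estimate (MC dropout). This is a hypothetical minimal sketch, not the code from the video; all names and the architecture are my own assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: mean/log-variance heads (aleatoric) + MC dropout (epistemic).
class MeanVarNet(nn.Module):
    def __init__(self, in_dim=1, hidden=64, p_drop=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # predict log-variance for numerical stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, y):
    # Negative log-likelihood of y under N(mean, exp(logvar)) -- the MLE training loss
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout active at inference time (MC dropout)
    means, variances = [], []
    for _ in range(n_samples):
        m, lv = model(x)
        means.append(m)
        variances.append(lv.exp())
    means = torch.stack(means)          # (n_samples, batch, 1)
    variances = torch.stack(variances)
    aleatoric = variances.mean(dim=0)   # average predicted noise variance
    epistemic = means.var(dim=0)        # spread of the means across dropout masks
    return means.mean(dim=0), aleatoric, epistemic

model = MeanVarNet()
x = torch.randn(8, 1)
pred, alea, epis = predict_with_uncertainty(model, x)
```

The sum `aleatoric + epistemic` would then serve as the overall uncertainty score mentioned above.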
Very interesting!!
Thank you for the video. In the Monte Carlo dropout method you used a dropout probability of p=0.2 and said that it worked better for your case, while other people used higher dropout rates. I have been asking myself: aren't we going to get higher uncertainties with higher dropout rates and lower uncertainties with lower dropout rates? If so, how do we know which dropout rate gives us the most realistic uncertainty? I would appreciate your feedback. Nick
Thanks for the nice work! I want to ask whether it is necessary to consider aleatoric and epistemic uncertainty in combination in real-world applications, or whether either one alone is enough.
For example, in the case of deep ensembles, you also output the variance of each model, which can be used to estimate aleatoric uncertainty. But you only plot the epistemic uncertainty as the confidence in your figures. Does that mean it is unnecessary to consider aleatoric uncertainty once you have estimated epistemic uncertainty?
I think I would try to separate them and use both.
Aleatoric uncertainty will tell you when the data is noisy, or generally when it's difficult to give an answer.
But epistemic uncertainty will tell you when your model is not sure, which usually indicates areas that are not covered by a lot of data.
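The separation described in this reply can be sketched for a deep ensemble whose members each predict a mean and a variance: the aleatoric part is the average of the members' predicted variances, and the epistemic part is the variance of the members' means. This is a hypothetical illustration under those assumptions, not the notebook's actual code.

```python
import torch

# Hypothetical sketch: decompose a deep ensemble's predictive uncertainty.
# Total variance = mean of member variances (aleatoric)
#                + variance of member means (epistemic).
def decompose(member_means, member_vars):
    # member_means, member_vars: tensors of shape (n_members, batch)
    aleatoric = member_vars.mean(dim=0)
    epistemic = member_means.var(dim=0, unbiased=False)
    return aleatoric, epistemic, aleatoric + epistemic

# Toy numbers: members agree on the mean but all predict high noise,
# so the aleatoric term dominates and the epistemic term is zero.
means = torch.tensor([[2.0], [2.0], [2.0]])
vars_ = torch.tensor([[1.0], [1.0], [1.0]])
alea, epis, total = decompose(means, vars_)
print(alea.item(), epis.item(), total.item())  # 1.0 0.0 1.0
```

Plotting only the epistemic term (as in the figures) shows model confidence, but the aleatoric term still carries separate information about data noise.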
@@DeepFindr Thanks for your kind answer! It is really helpful!👍
Keep up the hard work, thanks!
Hey,
This series about uncertainty is really nice; congratulations, and thank you very much for posting it on YouTube. May I ask how you built your slides? They look aesthetically super pleasing to me :)
Cheers,
Adrian
(I mean the slides from the previous videos on the theory, not from this one, of course)
I've responded to your mail :)
@@DeepFindr Great, thanks! I was wondering, if I may ask: what is the "extra parameter tweaking" you mention in your video that resulted in better uncertainty estimates? And is it already included in the linked Colab notebook?
Thanks!!
I think I was referring to things like the model architecture, the learning rate, and also model-specific parameters like the ensemble size, etc. :)
Brilliant!!! These videos help me a lot in understanding uncertainty. Could you make more videos regarding this topic? Thank you so much. 👍👍👍