How to Handle Uncertainty in Deep Learning #1.2

  • Published Jun 29, 2024
  • ▬▬ Code ▬▬▬▬▬▬▬▬▬▬▬
    Colab Notebook: colab.research.google.com/dri...
    ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
    Music from Uppbeat (free for Creators!):
    uppbeat.io/t/pryces/lateflights
    License code: 3O8NFX8WUHJBR2SB
    ▬▬ Used Videos ▬▬▬▬▬▬▬▬▬▬▬
    From these Pexels authors:
    Tom Fisk
    ▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
    00:00 Introduction
    00:15 Dataset
    01:38 Model 1
    07:37 Model 2
    12:26 Model 3
    ▬▬ Support me if you like 🌟
    ►Link to this channel: bit.ly/3zEqL1W
    ►Support me on Patreon: bit.ly/2Wed242
    ►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
    ►E-Mail: deepfindr@gmail.com
    ▬▬ My equipment 💻
    - Microphone: amzn.to/3DVqB8H
    - Microphone mount: amzn.to/3BWUcOJ
    - Monitors: amzn.to/3G2Jjgr
    - Monitor mount: amzn.to/3AWGIAY
    - Height-adjustable table: amzn.to/3aUysXC
    - Ergonomic chair: amzn.to/3phQg7r
    - PC case: amzn.to/3jdlI2Y
    - GPU: amzn.to/3AWyzwy
    - Keyboard: amzn.to/2XskWHP
    - Bluelight filter glasses: amzn.to/3pj0fK2
  • Science & Technology

COMMENTS • 15

  • @trevormiller931 2 years ago +4

    This is gold. Thanks for the thorough and great content!

  • @kevinkorfmann8780 2 years ago

Thanks, that was really helpful! Looking forward to part 3.

    • @DeepFindr 2 years ago

      Thank you! Next part is coming next week :)

  • @jiahao2709 9 months ago +1

    One thing I want to say: the test data is usually used only once, at test time. I think it would be better to call this "test data" validation data.

  • @nguyenxuanthanh6988 1 year ago

    Brilliant!!! These videos help me a lot in understanding uncertainty. Could you make more videos regarding this topic? Thank you so much.

  • @robertchamoun7914 1 year ago

    Thanks!

  • @kenbobcorn 1 year ago +1

    It's probably worth mentioning that you are computing gradients on your test set by not using torch.no_grad() in the test loop. This series is all about uncertainty, so it's important that you aren't computing gradients on your test set, which would leak into your mu and var values — contrary to what you are trying to show.

    • @DeepFindr 1 year ago

      Hi! Good remark. But as long as you aren't running backpropagation w.r.t. the test loss, no information leaks into the model weights. torch.no_grad() is mainly used to save memory.
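A minimal sketch of the point discussed above (the model and tensor names here are illustrative, not taken from the video's notebook): without torch.no_grad() the forward pass still builds a computation graph — costing memory — but the weights only change if backward() and an optimizer step are actually run on a loss.

```python
import torch

# Hypothetical tiny model, just to illustrate the behavior.
model = torch.nn.Linear(4, 2)
x = torch.randn(8, 4)

# Evaluation WITHOUT no_grad: the output carries a grad_fn (a graph is
# built, using extra memory), but the weights are untouched as long as
# we never call backward() / optimizer.step() on a test loss.
out = model(x)
assert out.requires_grad

# Recommended test loop: inside no_grad, no graph is built at all.
with torch.no_grad():
    out = model(x)
assert not out.requires_grad
```

So the comment and the reply are both right: gradients on the test set don't change the model unless you backpropagate, but wrapping the test loop in torch.no_grad() is still the idiomatic way to evaluate.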

  • @shilpimajumder7917 2 years ago

    Thanks for the video... I learned a lot. Please upload some videos on uncertainty estimation in image classification.

    • @DeepFindr 2 years ago

      Thanks! For image classification the same principles apply — you just use other layers (Conv2d instead of Linear).
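To illustrate the reply above, here is a hedged sketch (not the video's code; the class and head names are made up): the same two-headed mean/variance idea, with a Conv2d feature extractor in front instead of Linear layers, and a softplus to keep the predicted variance positive.

```python
import torch
import torch.nn as nn

class ConvUncertaintyNet(nn.Module):
    """Illustrative image model with separate mean and variance heads."""
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor replaces the Linear stack.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed-size feature vector
            nn.Flatten(),
        )
        self.mu_head = nn.Linear(16, 1)   # predicted mean
        self.var_head = nn.Linear(16, 1)  # raw variance output

    def forward(self, x):
        h = self.features(x)
        # softplus guarantees a strictly positive variance estimate
        return self.mu_head(h), nn.functional.softplus(self.var_head(h))

net = ConvUncertaintyNet()
mu, var = net(torch.randn(2, 3, 32, 32))  # batch of two 32x32 RGB images
assert mu.shape == (2, 1) and (var > 0).all()
```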

  • @clima3993 1 year ago

    What if p(y|x) is not Gaussian? What if y is high-dimensional?

    • @DeepFindr 1 year ago

      There are alternative loss functions for other distributions, such as Laplace, etc., or you can transform the target variable to match a supported distribution.
      Multidimensionality is no problem for GaussianNLLLoss: it simply applies the calculation per dimension and averages.
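The per-dimension averaging mentioned in the reply can be checked directly with PyTorch's built-in torch.nn.GaussianNLLLoss (the tensor shapes below are arbitrary examples):

```python
import torch
import torch.nn as nn

loss_fn = nn.GaussianNLLLoss(reduction="mean")

# Batch of 8 samples with a 3-dimensional target: the negative
# log-likelihood is computed element-wise and averaged, so extra
# output dimensions need no special handling.
mu = torch.randn(8, 3)        # predicted means
var = torch.rand(8, 3) + 0.1  # predicted variances (must be positive)
y = torch.randn(8, 3)         # targets

loss = loss_fn(mu, y, var)    # argument order: input, target, var
assert loss.dim() == 0        # a single averaged scalar
```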

    • @clima3993 1 year ago

      @DeepFindr Thanks for the helpful reply. A follow-up question: what about using a conditional generative model to handle aleatoric uncertainty?

    • @DeepFindr 1 year ago

      Sure, that's also a reasonable approach. You can learn the data distribution and detect out-of-distribution samples using generative models.