Stationary Activations for Uncertainty Calibration in Deep Learning

  • Published 19 Jul 2024
  • Presentation video for the paper: Lassi Meronen, Christabella Irwanto, and Arno Solin (2020). Stationary Activations for Uncertainty Calibration in Deep Learning. Advances in Neural Information Processing Systems (NeurIPS).
    arXiv preprint: arxiv.org/abs/2010.09494
  • Science & Technology

COMMENTS • 1

  • @nguyenngocly1484 • 3 years ago

    You can also build neural nets the other way around: with fixed dot products (implemented with fast transforms) and adjustable activation functions. Parametric (adjustable) ReLU is a known thing anyway. Of course, you have to prevent the first transform from simply taking the spectrum of the input, which you can do with a fixed random pattern of sign flips. Then use a final transform as a readout layer. The fast Walsh-Hadamard transform works well.
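
    A minimal NumPy sketch of the architecture the comment describes: the dot products are fixed fast Walsh-Hadamard transforms, the learnable parameters live in per-unit parametric ReLU slopes, a fixed random sign-flip pattern decorrelates the input from the first transform, and a final transform serves as the readout. All names (`fwht`, `SwappedNet`), the slope initialization, and the class structure are illustrative assumptions, not from the paper or the comment.

    ```python
    import numpy as np

    def fwht(x):
        """Fast Walsh-Hadamard transform along the last axis in O(n log n).
        The length must be a power of two; scaled to be orthonormal."""
        x = np.array(x, dtype=float, copy=True)
        n = x.shape[-1]
        h = 1
        while h < n:
            for i in range(0, n, 2 * h):
                a = x[..., i:i + h].copy()
                b = x[..., i + h:i + 2 * h].copy()
                x[..., i:i + h] = a + b          # butterfly: sums
                x[..., i + h:i + 2 * h] = a - b  # butterfly: differences
            h *= 2
        return x / np.sqrt(n)

    class SwappedNet:
        """The 'swapped around' net from the comment (hypothetical sketch):
        the weights (dot products) are fixed Walsh-Hadamard transforms, and
        learning happens only in the per-unit parametric ReLU slopes."""

        def __init__(self, dim, depth, seed=0):
            rng = np.random.default_rng(seed)
            # Fixed random sign flips: stop the first transform from
            # simply computing the input's spectrum. Never trained.
            self.signs = rng.choice([-1.0, 1.0], size=dim)
            # One adjustable negative-side slope per unit per layer
            # (parametric ReLU); the 0.25 init is an arbitrary assumption.
            self.slopes = [np.full(dim, 0.25) for _ in range(depth)]

        def forward(self, x):
            h = fwht(x * self.signs)              # sign flips, then fixed transform
            for a in self.slopes:
                h = np.where(h > 0.0, h, a * h)   # adjustable (parametric) ReLU
                h = fwht(h)                       # fixed dot products via fast transform
            return h                              # the last transform is the readout layer

    net = SwappedNet(dim=8, depth=3)
    y = net.forward(np.ones(8))
    ```

    Under this reading, only the PReLU slopes would be trained (e.g., by backpropagation), so each layer costs O(n log n) compute and O(n) parameters instead of the O(n^2) of a dense weight matrix.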