A Comparison of Bayesian Deep Learning for Out of Distribution Detection and Uncertainty Estimation
Date Issued
2020-07-17
Date Available
2024-02-09T17:22:07Z
Abstract
Deep neural networks have been successful in diverse discriminative classification tasks. Despite their good prediction performance, they are poorly calibrated, i.e., they often assign high confidence to misclassified predictions. This can undermine the trustworthiness and accountability of models deployed in real applications, where predictions are evaluated based on their confidence scores. In this work we validate and test the efficacy of likelihood-based models in the task of out-of-distribution (OoD) detection. Across different datasets and metrics we show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks, and that in the event of minimal overlap between in/out-distribution classes, even the best models exhibit a reduction in AUC scores. Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
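The abstract evaluates OoD detection by computing AUC over model uncertainty scores. A minimal sketch of this kind of evaluation, using predictive entropy over Monte Carlo ensemble samples as the OoD score; the synthetic logits and the pairwise AUROC implementation are illustrative assumptions, not the paper's actual models, datasets, or code:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    # probs: (n_mc_samples, n_examples, n_classes) MC samples of softmax
    # outputs; entropy of the MC-averaged predictive distribution.
    mean_probs = probs.mean(axis=0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)

def auroc(scores_in, scores_out):
    # AUROC for "higher score = more OoD": fraction of (out, in) pairs
    # where the OoD example scores higher, counting ties as half.
    s_in = np.asarray(scores_in)[:, None]
    s_out = np.asarray(scores_out)[None, :]
    return (s_out > s_in).mean() + 0.5 * (s_out == s_in).mean()

# Toy demo with synthetic logits (hypothetical data, not from the paper):
rng = np.random.default_rng(0)
logits_in = rng.normal(0.0, 0.1, size=(20, 100, 10))
logits_in[..., 0] += 5.0                                # confident, consistent
logits_out = rng.normal(0.0, 1.0, size=(20, 100, 10))   # diffuse, inconsistent

h_in = predictive_entropy(softmax(logits_in))
h_out = predictive_entropy(softmax(logits_out))
print(f"AUROC (entropy as OoD score): {auroc(h_in, h_out):.3f}")
```

In-distribution inputs produce peaked, mutually consistent MC samples (low predictive entropy), while OoD inputs produce diffuse, disagreeing samples (high entropy), so entropy separates the two sets and the AUROC is close to 1.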
Sponsorship
Science Foundation Ireland
Type of Material
Conference Publication
Copyright (Published Version)
2020 the Authors
Language
English
Status of Item
Peer reviewed
Conference Details
The ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning (ICML UDL 2020), Virtual Conference, 17 July 2020
This item is made available under a Creative Commons License
File(s)
Name
UDL2020-paper-064.pdf
Size
158.06 KB
Format
Adobe PDF
Checksum (MD5)
28f8221e8612e9376bbd43901dcaf575