Title: A Comparison of Bayesian Deep Learning for Out of Distribution Detection and Uncertainty Estimation
Authors: MacNamee, Brian; Pakrashi, Arjun; Mitros, John (Ioannis)
Conference: The ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning (ICML UDL 2020), Virtual Conference, 17 July 2020
Publication date: 2020-07-17
URI: http://hdl.handle.net/10197/25420
Type: Conference Publication
Language: en
Keywords: Anomaly detection; Out-of-distribution detection (OoD); Bayesian deep learning
Grant: 15/CDA/3520
Record dates: 2024-02-09; 2021-01-24
Rights: 2020 the A
License: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/

Abstract: Deep neural networks have been successful in diverse discriminative classification tasks. Despite their good predictive performance, they are poorly calibrated, i.e., they often assign high confidence to misclassified predictions. This undermines the trustworthiness and accountability of models deployed in real applications, where predictions are evaluated based on their confidence scores. In this work we propose to validate and test the efficacy of likelihood-based models in the task of out-of-distribution (OoD) detection. On different datasets and metrics we show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks, and that in the event of minimal overlap between in/out distribution classes, even the best models exhibit a reduction in AUC scores. Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
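The abstract reports AUC scores for confidence-based OoD detection. As a minimal sketch (not the paper's code; the scores below are made-up examples), AUROC can be computed directly from per-sample confidence scores as the Mann-Whitney U statistic, i.e. the probability that a randomly chosen in-distribution sample receives a higher confidence than a randomly chosen OoD sample:

```python
def auroc(in_scores, out_scores):
    """AUROC for OoD detection from raw confidence scores.

    Equals the fraction of (in, out) pairs in which the
    in-distribution sample is ranked above the OoD sample,
    counting ties as half (Mann-Whitney U / (n_in * n_out)).
    """
    pairs = 0.0
    for s_in in in_scores:
        for s_out in out_scores:
            if s_in > s_out:
                pairs += 1.0
            elif s_in == s_out:
                pairs += 0.5
    return pairs / (len(in_scores) * len(out_scores))

# Hypothetical confidence scores: a well-calibrated model should
# assign lower confidence to OoD inputs, pushing AUROC toward 1.0.
in_conf = [0.95, 0.90, 0.80]
out_conf = [0.60, 0.85]
score = auroc(in_conf, out_conf)
```

A score of 0.5 corresponds to chance-level separation; the quadratic pairwise loop is for clarity only, and a sort-based rank computation (or `sklearn.metrics.roc_auc_score`) would be used at scale.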