Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings

Files in This Item:
2009.01798.pdf (790.95 kB, Adobe PDF)
Title: Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings
Authors: Mitros, John (Ioannis); Pakrashi, Arjun; Mac Namee, Brian
Permanent link: http://hdl.handle.net/10197/12575
Date: 23-Aug-2020
Online since: 2021-10-26T11:22:09Z
Abstract: Deep neural networks have been successful in diverse discriminative classification tasks, although they are poorly calibrated, often assigning high probability to misclassified predictions. This can undermine the trustworthiness and accountability of models deployed in real applications, where predictions are evaluated based on their confidence scores. Existing solutions suggest that benefits can be attained by combining deep neural networks with Bayesian inference to quantify uncertainty over the models’ predictions for ambiguous data points. In this work we propose to validate and test the efficacy of likelihood-based models in the task of out-of-distribution (OoD) detection. Across different datasets and metrics we show that Bayesian deep learning models indeed outperform conventional neural networks, but in the event of minimal overlap between in/out-distribution classes, even the best models exhibit a reduction in AUC scores when detecting OoD data. We hypothesise that the sensitivity of neural networks to unseen inputs could be a multi-factor phenomenon arising from different architectural design choices, often amplified by the curse of dimensionality. Preliminary investigations indicate a potential inherent role of bias due to choices of initialisation, architecture, or activation functions. Furthermore, we analyse the effect of adversarial-noise resistance methods on in- and out-of-distribution performance when combined with Bayesian deep learners.
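The abstract describes scoring OoD detection by AUC over model uncertainty. As an illustrative sketch only (not code from the chapter), the toy example below uses predictive entropy of softmax outputs as the uncertainty score and computes the AUC of separating OoD from in-distribution inputs; the probability values are hypothetical.

```python
import math

def entropy(probs):
    """Predictive entropy of a softmax distribution; higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def auroc(in_scores, ood_scores):
    """AUC of separating OoD (positive) from in-distribution inputs by an
    uncertainty score, computed as P(score_ood > score_in), ties counted 0.5."""
    pairs = [(o, i) for o in ood_scores for i in in_scores]
    wins = sum(1.0 if o > i else 0.5 if o == i else 0.0 for o, i in pairs)
    return wins / len(pairs)

# Hypothetical softmax outputs: in-distribution predictions are confident
# (low entropy); OoD predictions are closer to uniform (high entropy).
in_probs = [[0.9, 0.05, 0.05]] * 100
ood_probs = [[0.4, 0.35, 0.25]] * 100

in_scores = [entropy(p) for p in in_probs]
ood_scores = [entropy(p) for p in ood_probs]
print(round(auroc(in_scores, ood_scores), 2))  # perfect separation in this toy case: 1.0
```

When the in/out classes overlap, as the abstract notes, the entropy distributions also overlap and this AUC drops below 1.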
Funding Details: Science Foundation Ireland
Type of material: Book Chapter
Publisher: Springer
Series/Report no.: Lecture Notes in Computer Science; 12535
Copyright (published version): 2020 Springer
Keywords: OoD detection; Bayesian deep learning; Uncertainty quantification; Generalisation; Anomalies; Outliers
DOI: 10.1007/978-3-030-66415-2_5
Other versions: https://eccv2020.eu
Language: en
Status of Item: Peer reviewed
Is part of: Bartoli, A. and Fusiello, A. (eds.). Computer Vision – ECCV 2020 Workshops. ECCV 2020: Glasgow, UK, August 23–28, 2020, Proceedings, Part I
Conference Details: The 16th European Conference on Computer Vision (ECCV 2020), Online Conference, 23-28 August 2020
ISBN: 978-3-030-66414-5
This item is made available under a Creative Commons License: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
Appears in Collections: Computer Science Research Collection; Insight Research Collection


