Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings

DC Field: Value (Language)
dc.contributor.author: Mitros, John (Ioannis)
dc.contributor.author: Pakrashi, Arjun
dc.contributor.author: MacNamee, Brian
dc.date.accessioned: 2021-10-26T11:22:09Z
dc.date.available: 2021-10-26T11:22:09Z
dc.date.copyright: 2020 Springer (en_US)
dc.date.issued: 2020-08-23
dc.identifier.isbn: 978-3-030-66414-5
dc.identifier.uri: http://hdl.handle.net/10197/12575
dc.description: The 16th European Conference on Computer Vision (ECCV 2020), Online Conference, 23-28 August 2020 (en_US)
dc.description.abstract: Deep neural networks have been successful in diverse discriminative classification tasks, although they are often poorly calibrated, assigning high probability to misclassified predictions. This can undermine the trustworthiness and accountability of models deployed in real applications, where predictions are acted upon according to their confidence scores. Existing solutions point to the benefits of combining deep neural networks with Bayesian inference to quantify uncertainty over model predictions for ambiguous data points. In this work we validate and test the efficacy of likelihood-based models in the task of out-of-distribution (OoD) detection. Across different datasets and metrics we show that Bayesian deep learning models indeed outperform conventional neural networks, but when there is minimal overlap between in- and out-of-distribution classes even the best models exhibit a reduction in AUC scores when detecting OoD data. We hypothesise that the sensitivity of neural networks to unseen inputs could be a multi-factor phenomenon arising from different architectural design choices, often amplified by the curse of dimensionality. Preliminary investigations indicate a potential inherent role of bias due to choices of initialisation, architecture, or activation functions. Furthermore, we analyse the effect of adversarial noise resistance methods on in- and out-of-distribution performance when combined with Bayesian deep learners. (en_US)
dc.description.sponsorship: Science Foundation Ireland (en_US)
dc.language.iso: en (en_US)
dc.publisher: Springer (en_US)
dc.relation.ispartof: Bartoli, A. and Fusiello, A. (eds.). Computer Vision – ECCV 2020 Workshops. ECCV 2020: Glasgow, UK, August 23–28, 2020, Proceedings, Part I (en_US)
dc.relation.ispartofseries: Lecture Notes in Computer Science (en_US)
dc.relation.ispartofseries: 12535 (en_US)
dc.rights: The final publication is available at www.springerlink.com. (en_US)
dc.subject: OoD detection (en_US)
dc.subject: Bayesian deep learning (en_US)
dc.subject: Uncertainty quantification (en_US)
dc.subject: Generalisation (en_US)
dc.subject: Anomalies (en_US)
dc.subject: Outliers (en_US)
dc.title: Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings (en_US)
dc.type: Book Chapter (en_US)
dc.internal.authorcontactother: brian.macnamee@ucd.ie (en_US)
dc.internal.webversions: https://eccv2020.eu
dc.status: Peer reviewed (en_US)
dc.identifier.doi: 10.1007/978-3-030-66415-2_5
dc.neeo.contributor: Mitros|John (Ioannis)|aut|
dc.neeo.contributor: Pakrashi|Arjun|aut|
dc.neeo.contributor: MacNamee|Brian|aut|
dc.date.embargo: 2021-08-23 (en_US)
dc.date.updated: 2021-01-22T23:19:18Z
dc.identifier.grantid: 15/CDA/3520
dc.rights.license: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/ (en_US)
item.grantfulltext: open
item.fulltext: With Fulltext
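
The abstract above evaluates OoD detection by comparing the AUC scores of Bayesian deep learning models against conventional neural networks. As a rough illustration of that evaluation protocol (a minimal sketch, not the authors' implementation), the Python code below scores each input by the predictive entropy of a Monte Carlo dropout ensemble, one common approximate Bayesian inference scheme, and computes the AUC for separating in-distribution from OoD inputs. The model, sample count, and choice of entropy as the uncertainty score are illustrative assumptions.

# Illustrative sketch only (not the paper's code): AUC-based OoD detection
# using Monte Carlo dropout as an approximate Bayesian posterior.
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def predictive_entropy(model, x, n_samples=20):
    """Entropy of the MC-dropout-averaged predictive distribution.

    Higher entropy means the model is more uncertain about the input.
    """
    model.train()  # keep dropout layers stochastic at test time (MC dropout)
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def ood_auc(model, x_in, x_out):
    """AUC of separating in-distribution (label 0) from OoD (label 1) inputs."""
    scores = torch.cat([predictive_entropy(model, x_in),
                        predictive_entropy(model, x_out)])
    labels = torch.cat([torch.zeros(len(x_in)), torch.ones(len(x_out))])
    return roc_auc_score(labels.numpy(), scores.numpy())

Given a trained classifier, ood_auc(model, in_distribution_batch, ood_batch) yields the kind of AUC comparison the abstract reports; a deterministic baseline corresponds to a single forward pass with dropout disabled. When the in- and out-of-distribution classes overlap heavily, the entropy scores of the two groups mix and the AUC drops, which is the failure mode the abstract highlights even for the best Bayesian models.
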
Appears in Collections: Computer Science Research Collection; Insight Research Collection

Files in This Item:
2009.01798.pdf (790.95 kB, Adobe PDF)