Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings

File(s)
2009.01798.pdf (790.95 KB, PDF)
Author(s)
Mitros, John (Ioannis) 
Pakrashi, Arjun 
Mac Namee, Brian
URI
http://hdl.handle.net/10197/12575
Date Issued
23 August 2020
Date Available
26 October 2021
Abstract
Deep neural networks have been successful in diverse discriminative classification tasks, although they are often poorly calibrated, assigning high probability to misclassified predictions. This can undermine the trustworthiness and accountability of such models when they are deployed in real applications, where predictions are evaluated based on their confidence scores. Existing solutions suggest the benefits attained by combining deep neural networks and Bayesian inference to quantify uncertainty over the models' predictions for ambiguous data points. In this work we propose to validate and test the efficacy of likelihood-based models in the task of out-of-distribution (OoD) detection. Across different datasets and metrics we show that Bayesian deep learning models indeed outperform conventional neural networks, but in the event of minimal overlap between in/out-distribution classes, even the best models exhibit a reduction in AUC scores when detecting OoD data. We hypothesise that the sensitivity of neural networks to unseen inputs could be a multi-factor phenomenon arising from different architectural design choices, often amplified by the curse of dimensionality. Preliminary investigations indicate a potential inherent role of bias due to choices of initialisation, architecture or activation functions. Furthermore, we analyse the effect of adversarial-noise-resistance methods on in- and out-of-distribution performance when combined with Bayesian deep learners.
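The evaluation the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: `predictive_entropy` scores an input by the entropy of the mean posterior predictive distribution (averaged over Monte Carlo samples from a Bayesian deep learner), and `auroc` computes the area under the ROC curve as the fraction of (in-distribution, OoD) pairs that are ranked correctly by that score.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy of the mean predictive distribution.
    probs: (n_posterior_samples, n_classes) class-probability vectors,
    e.g. from MC dropout or a deep ensemble."""
    mean = probs.mean(axis=0)
    return float(-np.sum(mean * np.log(mean + 1e-12)))

def auroc(scores_in, scores_out) -> float:
    """AUROC for separating in-distribution (expected low score) from
    OoD (expected high score) inputs, computed as the fraction of
    correctly ranked pairs; ties count as half a correct ranking."""
    s_in = np.asarray(scores_in)[:, None]    # shape (n_in, 1)
    s_out = np.asarray(scores_out)[None, :]  # shape (1, n_out)
    return float(np.mean(s_out > s_in) + 0.5 * np.mean(s_out == s_in))
```

An AUROC of 1.0 means every OoD input received a higher uncertainty score than every in-distribution input; 0.5 is chance level, which is the direction of degradation the abstract reports when in- and out-of-distribution classes overlap.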
Sponsorship
Science Foundation Ireland
Type of Material
Book Chapter
Publisher
Springer
Series
Lecture Notes in Computer Science, vol. 12535
Copyright (Published Version)
2020 Springer
Keywords
  • OoD detection
  • Bayesian deep learning
  • Uncertainty quantification
  • Generalisation
  • Anomalies
  • Outliers
DOI
10.1007/978-3-030-66415-2_5
Web versions
https://eccv2020.eu
Language
English
Status of Item
Peer reviewed
Part of
Bartoli, A. and Fusiello, A. (eds.). Computer Vision – ECCV 2020 Workshops. ECCV 2020: Glasgow, UK, August 23–28, 2020, Proceedings, Part I
Description
The 16th European Conference on Computer Vision (ECCV 2020), Online Conference, 23-28 August 2020
ISBN
978-3-030-66414-5
This item is made available under a Creative Commons License
https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
Owning collection
Computer Science Research Collection
Scopus© citations
0 (as of Jan 25, 2023)
Views
374 (1 last week, as of Jan 26, 2023)
Downloads
387 (2 last week, as of Jan 26, 2023)
University College Dublin Research Repository UCD
The Library, University College Dublin, Belfield, Dublin 4
Phone: +353 (0)1 716 7583
Fax: +353 (0)1 283 7667
Email: research.repository@ucd.ie
Guide: http://libguides.ucd.ie/rru

Built with DSpace-CRIS software - Extension maintained and optimized by 4Science
