Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings

Author(s)
Mitros, John (Ioannis)  
Pakrashi, Arjun  
Mac Namee, Brian  
URI
http://hdl.handle.net/10197/12575
Date Issued
2020-08-23
Date Available
2021-10-26T11:22:09Z
Embargo end date
2021-08-23
Abstract
Deep neural networks have been successful in diverse discriminative classification tasks; however, they are often poorly calibrated, assigning high probability to misclassified predictions. This undermines the trustworthiness and accountability of such models when they are deployed in real applications, where predictions are evaluated based on their confidence scores. Existing solutions suggest the benefits attained by combining deep neural networks and Bayesian inference to quantify uncertainty over the models' predictions for ambiguous data points. In this work we propose to validate and test the efficacy of likelihood-based models in the task of out-of-distribution (OoD) detection. Across different datasets and metrics we show that Bayesian deep learning models indeed outperform conventional neural networks, but in the event of minimal overlap between in- and out-of-distribution classes, even the best models exhibit a reduction in AUC scores when detecting OoD data. We hypothesise that the sensitivity of neural networks to unseen inputs could be a multi-factor phenomenon arising from different architectural design choices, often amplified by the curse of dimensionality. Preliminary investigations indicate a potential inherent role of bias due to choices of initialisation, architecture or activation functions. Furthermore, we analyse the effect of adversarial noise resistance methods on in- and out-of-distribution performance when combined with Bayesian deep learners.
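The uncertainty-based OoD scoring the abstract describes can be illustrated with a minimal sketch: score each input by the entropy of the mean predictive distribution over several stochastic forward passes (MC-dropout style), then measure detection quality with AUC. The synthetic logits and sampling setup below are purely illustrative assumptions, not the paper's models or datasets.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    """probs: (T, N, C) softmax samples from T stochastic passes.
    Returns entropy of the mean predictive distribution per example."""
    mean_probs = probs.mean(axis=0)  # (N, C)
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

rng = np.random.default_rng(0)
T, N, C = 20, 100, 10  # passes, examples per split, classes

# Simulated in-distribution logits: one dominant class per example.
in_logits = rng.normal(0.0, 1.0, (N, C))
in_logits[np.arange(N), rng.integers(0, C, N)] += 6.0
# Simulated OoD logits: no dominant class, diffuse predictions.
ood_logits = rng.normal(0.0, 1.0, (N, C))

# Perturb logits per pass to mimic MC-dropout sampling noise.
in_probs = softmax(in_logits[None] + rng.normal(0.0, 0.5, (T, N, C)))
ood_probs = softmax(ood_logits[None] + rng.normal(0.0, 0.5, (T, N, C)))

scores = np.concatenate([predictive_entropy(in_probs),
                         predictive_entropy(ood_probs)])
labels = np.concatenate([np.zeros(N), np.ones(N)])  # 1 = OoD

auc = roc_auc_score(labels, scores)
print(f"OoD detection AUC via predictive entropy: {auc:.3f}")
```

Because the in-distribution predictions are confident (low entropy) and the OoD predictions are diffuse (high entropy), the entropy score separates the two splits well; the paper's observation is that this separation degrades when the in/out distributions overlap.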
Sponsorship
Science Foundation Ireland
Type of Material
Book Chapter
Publisher
Springer
Series
Lecture Notes in Computer Science
12535
Copyright (Published Version)
2020 Springer
Subjects
OoD detection
Bayesian deep learning
Uncertainty quantification
Generalisation
Anomalies
Outliers

DOI
10.1007/978-3-030-66415-2_5
Web versions
https://eccv2020.eu
Language
English
Status of Item
Peer reviewed
Journal
Bartoli, A. and Fusiello, A. (eds.). Computer Vision – ECCV 2020 Workshops. ECCV 2020: Glasgow, UK, August 23–28, 2020, Proceedings, Part I
Conference Details
The 16th European Conference on Computer Vision (ECCV 2020), Online Conference, 23-28 August 2020
ISBN
978-3-030-66414-5
This item is made available under a Creative Commons License
https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
File(s)
Name
2009.01798.pdf
Size
790.95 KB
Format
Adobe PDF
Checksum (MD5)
0c9c6c1c4b2e7a893f49307ed4fd880b
Owning collection
Computer Science Research Collection
Mapped collections
Insight Research Collection

Item descriptive metadata is released under a CC-0 (public domain) license: https://creativecommons.org/public-domain/cc0/.
All other content is subject to copyright.

For all queries please contact research.repository@ucd.ie.