A Categorisation of Post-hoc Explanations for Predictive Models
Author(s)
Date Issued
2019-03-27
Date Available
2021-05-26T10:33:57Z
Abstract
The ubiquity of machine learning based predictive models in modern society naturally leads people to ask how trustworthy those models are. In predictive modelling there is often a trade-off between accuracy and interpretability. For instance, doctors would like to know how effective some treatment will be for a patient, or why the model suggested a particular medication for a patient exhibiting those symptoms. We acknowledge that the necessity for interpretability is a consequence of an incomplete formalisation of the problem, or, more precisely, of multiple meanings attached to a particular concept. For certain problems it is not enough to get the answer (what); the model also has to provide an explanation of how it came to that conclusion (why), because a correct prediction only partially solves the original problem. In this article we extend an existing categorisation of techniques that aid model interpretability and test this categorisation.
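(Illustration, not part of the publication: a minimal sketch of the what/why distinction the abstract describes, using permutation feature importance as one post-hoc, model-agnostic explanation. The dataset, model, and scikit-learn calls below are assumptions chosen for brevity, not the paper's own setup.)

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a black-box model (an assumed example, not the paper's method).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The "what": the model's answer for a single record.
print("prediction:", model.predict(X_test.iloc[[0]])[0])

# The "why", obtained post hoc: how much held-out accuracy drops when
# each feature is shuffled, i.e. which inputs the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

Permutation importance is only one family of post-hoc explanation; surrogate models and example-based explanations answer the same "why" question in different ways.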
Sponsorship
Science Foundation Ireland
Other Sponsorship
Insight Research Centre
Type of Material
Conference Publication
Publisher
Association for the Advancement of Artificial Intelligence
Copyright (Published Version)
2018 Association for the Advancement of Artificial Intelligence
Web versions
Language
English
Status of Item
Peer reviewed
Conference Details
The 2019 Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium on Combining Machine Learning with Knowledge Engineering (AAAI-MAKE 2019), Palo Alto, California, 25-27 March 2019
This item is made available under a Creative Commons License
File(s)
Name
insight_publication.pdf
Size
281.01 KB
Format
Adobe PDF
Checksum (MD5)
32585bda06daa3bf42a175fe4819dfad
Owning collection
Mapped collections