Explaining Artificial Neural Networks With Post-Hoc Explanation-By-Example

DC Field | Value | Language
dc.contributor.author | Kenny, Eoin M. | -
dc.date.accessioned | 2022-11-24T12:05:14Z | -
dc.date.available | 2022-11-24T12:05:14Z | -
dc.date.copyright | 2022 the Author | en_US
dc.date.issued | 2022 | -
dc.identifier.uri | http://hdl.handle.net/10197/13265 | -
dc.description.abstract | Explainable artificial intelligence (XAI) has become one of the most popular research areas in AI over the past several years, with many workshops, conferences, and government/industry research programs now dedicated to the topic. Historically, one of the main avenues for this research was showing previous examples to explain or justify an automated prediction by an AI, and because such explanations mimic human reasoning, they have recently seen a resurgence as a way to deal with the opaque nature of modern black-box deep learning systems. However, recent implementations of this explanation strategy neither abstract the black-box AI’s reasoning faithfully nor focus on the important features used in a prediction; moreover, the synthetic examples they generate often lack plausibility. This thesis explores all of these avenues, both computationally and in user testing. The results demonstrate (1) a novel approach, called twin-systems, for computing nearest-neighbour explanations with higher fidelity to the AI system being explained than other state-of-the-art methods; (2) a novel XAI approach that focuses on specific “parts” of the explanations in twin-systems; (3) that these explanations make misclassifications seem less incorrect in user testing; and (4) that options other than nearest-neighbour explanations (e.g., semi-factuals) are valid and deserve more attention in the scientific community. | en_US
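To make the abstract's central idea concrete: post-hoc explanation-by-example pairs a black-box network with a case-based retrieval step, so a prediction is justified by showing the most similar training cases the network "saw". The sketch below is a minimal illustration of that general idea, not the thesis's actual twin-systems method; the random projection standing in for a trained network's penultimate layer, and all names (`penultimate`, `explain_by_example`), are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a training set of raw input feature vectors.
train_X = rng.normal(size=(100, 8))

def penultimate(x):
    # Hypothetical frozen feature extractor standing in for the ANN's
    # penultimate layer (a fixed linear map + ReLU here; a real
    # twin-system would use the trained network's own activations).
    W = np.linspace(-1.0, 1.0, 8 * 4).reshape(8, 4)
    return np.maximum(x @ W, 0.0)

# Precompute activations for every training case.
train_acts = penultimate(train_X)

def explain_by_example(query_x, k=1):
    """Return indices of the k training cases nearest to the query
    in the network's activation space (the case-retrieval step)."""
    q = penultimate(query_x)
    dists = np.linalg.norm(train_acts - q, axis=1)
    return np.argsort(dists)[:k]

# The retrieved training case(s) are then shown to the user as the
# explanation: "the network predicts this because the input is most
# similar to these previously seen examples".
neighbours = explain_by_example(train_X[0] + 0.01 * rng.normal(size=8))
```

Measuring distance in the network's own activation space, rather than raw input space, is what ties the retrieved example to the model's reasoning; the thesis's fidelity results concern exactly how well such a pairing tracks the underlying network.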
dc.language.iso | en | en_US
dc.publisher | University College Dublin. School of Computer Science | en_US
dc.subject | Artificial neural networks | en_US
dc.subject | Explanation-by-example | en_US
dc.subject | Explainable AI | en_US
dc.subject | Case-based reasoning | en_US
dc.title | Explaining Artificial Neural Networks With Post-Hoc Explanation-By-Example | en_US
dc.type | Doctoral Thesis | en_US
dc.status | Peer reviewed | en_US
dc.type.qualificationname | Ph.D. | en_US
dc.neeo.contributor | Kenny|Eoin M.|aut | -
dc.description.admin | 2022-11-08 JG: Author's signature removed from PDF | en_US
dc.date.updated | 2022-11-08 | en
dc.rights.license | https://creativecommons.org/licenses/by-nc-nd/3.0/ie/ | en_US
dc.contributor.orcid | 0000-0001-5800-2525 | en
dc.type.qualificationnamefreetext | PhD | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
Appears in Collections:Computer Science Theses
Files in This Item:
File | Size | Format
104118311.pdf | 38.13 MB | Adobe PDF
