Title: Explaining Artificial Neural Networks With Post-Hoc Explanation-By-Example
Author: Kenny, Eoin M.
Type: Doctoral Thesis
Dates: 2022-11-24; 2022; 2022-11-08
URI: http://hdl.handle.net/10197/13265
Language: en
Subjects: Artificial neural networks; Explanation-by-example; Explainable AI; Case-based reasoning
Licence: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/

Abstract: Explainable artificial intelligence (XAI) has become one of the most popular research areas in AI over the past several years, with many workshops, conferences, and government/industry research programs now dedicated to the topic. Historically, one of the main avenues for this type of research was showing previous examples to explain or justify an automated prediction by an AI, and such explanations have recently seen a resurgence as a way to deal with the opaque nature of modern black-box deep learning systems, because they mimic human reasoning. However, recent implementations of this explanation strategy do not abstract the black-box AI's reasoning faithfully, nor do they focus on the important features used in a prediction. Moreover, the synthetic examples they generate are often lacking in plausibility. This thesis explores all of these avenues both computationally and in user testing. The results demonstrate (1) a novel approach called twin-systems for computing nearest-neighbour explanations, which has the highest fidelity to the AI system it explains relative to other state-of-the-art methods, (2) the introduction of a novel XAI approach which focuses on specific "parts" of the explanations in twin-systems, (3) that these explanations make misclassifications seem less incorrect in user testing, and (4) that options other than nearest-neighbour explanations (e.g., semi-factuals) are valid and deserve more attention in the scientific community.
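The twin-systems idea described in the abstract pairs a black-box neural network with a nearest-neighbour retrieval over the training data, so that the retrieved cases serve as an explanation-by-example for the network's prediction. The sketch below is a minimal illustration of that pairing, not the exact method from the thesis: the scikit-learn setup and the feature-weighting scheme (a finite-difference, input-times-gradient saliency) are assumptions made for the example.

```python
# Minimal sketch of a twin-system style explanation-by-example:
# a neural network makes the prediction, and a feature-weighted k-NN
# over the training set retrieves the nearest cases as an explanation.
# The weighting scheme below is an illustrative stand-in, not
# necessarily the method used in the thesis.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black-box" twin: a neural network classifier.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
ann.fit(X_train, y_train)

def feature_contributions(model, x, eps=1e-3):
    """Approximate each feature's contribution to the predicted-class
    probability via finite-difference saliency times the input value."""
    pred = model.predict([x])[0]
    base = model.predict_proba([x])[0, pred]
    grads = np.zeros_like(x)
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += eps
        grads[i] = (model.predict_proba([x_pert])[0, pred] - base) / eps
    return grads * x

def explain_by_example(model, x, X_train, k=3):
    """The "white-box" twin: a k-NN over the training data whose distance
    metric is weighted by the network's feature contributions; it returns
    the indices of the k nearest training cases as the explanation."""
    w = np.abs(feature_contributions(model, x))
    w = w / (w.sum() + 1e-12)
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))  # weighted Euclidean distance
    return np.argsort(d)[:k]

query = X_test[0]
neighbours = explain_by_example(ann, query, X_train)
print("Prediction:", ann.predict([query])[0])
print("Explanatory cases (training indices):", neighbours)
print("Their labels:", y_train[neighbours])
```

The design point the sketch tries to convey is that the nearest-neighbour retrieval is not independent of the network: by weighting the distance metric with the network's own feature importances, the retrieved cases reflect what the black box actually relied on, which is what the abstract refers to as fidelity of the explanation to the AI system.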