  • Publication
    Bayesian Case-Exclusion and Explainable AI (XAI) for Sustainable Farming
Smart agriculture (SmartAg) has emerged as a rich domain for AI-driven decision support systems (DSS); however, it is often challenged by user-adoption issues. This paper reports a case-based reasoning system, PBI-CBR, that predicts grass growth for dairy farmers, combining predictive accuracy and explanations to improve user adoption. PBI-CBR’s key novelty is its use of Bayesian methods for case-base maintenance in a regression domain. Experiments report the trade-off between predictive accuracy and explanatory capability for different variants of PBI-CBR, and how updating Bayesian priors each year improves performance.
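The abstract above only gestures at how Bayesian case-base maintenance works, so the following is a minimal illustrative sketch of Bayesian case-exclusion for a k-NN regression case base. The Beta reliability model, error tolerance, and exclusion cutoff below are assumptions made for illustration; this is not the PBI-CBR algorithm itself.

```python
import numpy as np

# Illustrative Bayesian case-exclusion for a k-NN regression case base.
# NOTE: the Beta reliability model, error tolerance and exclusion cutoff are
# assumptions made for illustration; this is not the PBI-CBR algorithm itself.

class BayesianCaseBase:
    def __init__(self, X, y, k=3, err_tol=5.0, prior=(1.0, 1.0)):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        self.k, self.err_tol = k, err_tol
        # Beta(alpha, beta) posterior over each case's probability of supporting
        # an acceptably accurate prediction; the prior can be carried between years.
        self.alpha = np.full(len(self.y), prior[0])
        self.beta = np.full(len(self.y), prior[1])
        self.active = np.ones(len(self.y), dtype=bool)

    def _neighbours(self, query):
        idx = np.where(self.active)[0]
        dists = np.linalg.norm(self.X[idx] - query, axis=1)
        return idx[np.argsort(dists)[: self.k]]

    def predict(self, query):
        """Predict by averaging the k nearest active cases; return them as the explanation."""
        nn = self._neighbours(np.asarray(query, dtype=float))
        return float(self.y[nn].mean()), nn

    def observe(self, query, actual, cutoff=0.3):
        """Once the true value is known, update the retrieved cases' reliability
        posteriors and exclude cases whose posterior mean reliability is too low."""
        pred, nn = self.predict(query)
        ok = abs(pred - actual) <= self.err_tol
        self.alpha[nn] += 1.0 if ok else 0.0
        self.beta[nn] += 0.0 if ok else 1.0
        reliability = self.alpha / (self.alpha + self.beta)   # posterior mean
        self.active &= reliability >= cutoff
```

In such a setup, one growing season's (feature vector, grass growth) cases could seed the case base, observe() would prune unreliable cases as the next season's measurements arrive, and carrying the learned alpha/beta counts forward would play the role of updating the priors each year.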
  • Publication
    Explaining Artificial Neural Networks With Post-Hoc Explanation-By-Example
(University College Dublin. School of Computer Science, 2022)
Explainable artificial intelligence (XAI) has become one of the most popular research areas in AI over the past several years, with many workshops, conferences, and government/industry research programs now dedicated to the topic. Historically, one of the main avenues for this type of research was based around showing previous examples to explain or justify an automated prediction by an AI, and because such explanations mimic human reasoning they have recently seen a resurgence as a way to deal with the opaque nature of modern black-box deep learning systems. However, recent implementations of this explanation strategy do not abstract the black-box AI’s reasoning in a faithful way, nor do they focus on the important features used in a prediction. Moreover, the synthetic examples generated are often lacking in plausibility. This thesis explores all these avenues both computationally and in user testing. The results demonstrate (1) a novel approach called twin-systems for computing nearest neighbour explanations, which has the highest fidelity to the AI system it explains relative to other state-of-the-art methods, (2) the introduction of a novel XAI approach which focuses on specific “parts” of the explanations in twin-systems, (3) that these explanations have the effect of making misclassifications seem less incorrect in user testing, and (4) that other options aside from nearest neighbour explanations (e.g., semi-factuals) are valid options and deserve more attention in the scientific community.
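As a concrete, hedged illustration of the post-hoc explanation-by-example strategy studied in the thesis, the sketch below explains an MLP's prediction by retrieving the training cases nearest to the query in the network's hidden-layer activation space. The scikit-learn model, the single hidden layer, and the Euclidean latent-space metric are assumptions chosen for brevity, not the thesis's actual twin-system implementation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

# Hedged sketch of post-hoc explanation-by-example: explain an MLP's prediction
# by the nearest training cases in its hidden-layer (latent) space.
# The dataset, single hidden layer and Euclidean metric are illustrative choices.

X, y = load_iris(return_X_y=True)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def hidden(acts):
    """Hidden-layer (ReLU) activations of the fitted one-hidden-layer MLP."""
    return np.maximum(0.0, acts @ ann.coefs_[0] + ann.intercepts_[0])

H_train = hidden(X)

def explain(query, n_cases=3):
    """Return the ANN's prediction plus its nearest training cases in latent space."""
    pred = ann.predict(query.reshape(1, -1))[0]
    dists = np.linalg.norm(H_train - hidden(query.reshape(1, -1)), axis=1)
    nearest = np.argsort(dists)[:n_cases]
    return pred, [(X[i], y[i]) for i in nearest]

pred, cases = explain(X[0])
print("prediction:", pred, "labels of explanatory cases:", [int(c[1]) for c in cases])
```

When a training case is used as the query, the nearest retrieved case is the query itself; in practice the explanatory cases would be drawn from a case base distinct from the query.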
  • Publication
    How Case-Based Reasoning Explains Neural Networks: A Theoretical Analysis of XAI Using Post-Hoc Explanation-by-Example from a Survey of ANN-CBR Twin-Systems
(Springer, 2019-08-09)
This paper proposes a theoretical analysis of one approach to the eXplainable AI (XAI) problem, using post-hoc explanation-by-example, that relies on the twinning of artificial neural networks (ANNs) with case-based reasoning (CBR) systems; so-called ANN-CBR twins. It surveys these systems to advance a new theoretical interpretation of previous work and define a road map for CBR’s further role in XAI. A systematic survey of 1,102 papers was conducted to identify a fragmented literature on this topic and trace its influence to more recent work involving deep neural networks (DNNs). The twin-systems approach is advanced as one possible coherent, generic solution to the XAI problem. The paper concludes by road-mapping future directions for this XAI solution, considering (i) further tests of feature-weighting techniques, (ii) how explanatory cases might be deployed (e.g., in counterfactuals, a fortiori cases), and (iii) the unwelcome, much-ignored issue of user evaluation.
  • Publication
    The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
The notion of twin-systems is proposed to address the eXplainable AI (XAI) problem, where an uninterpretable black-box system is mapped to a white-box “twin” that is more interpretable. In this short paper, we overview very recent work that advances a generic solution to the XAI problem, the so-called twin-system approach. The most popular twinning in the literature is that between an Artificial Neural Network (ANN) as a black box and a Case-Based Reasoning (CBR) system as a white box, where the latter acts as an interpretable proxy for the former. We outline how recent work reviving this idea has applied it to deep learning methods. Furthermore, we detail the many fruitful directions in which this work may be taken, such as determining (i) the most accurate feature-weighting methods to use, (ii) the most appropriate deployments for explanatory cases, and (iii) the cases of most explanatory value to users.
  • Publication
    Twin-Systems to Explain Artificial Neural Networks using Case-Based Reasoning: Comparative Tests of Feature-Weighting Methods in ANN-CBR Twins for XAI
In this paper, twin-systems are described to address the eXplainable artificial intelligence (XAI) problem, where a black-box model is mapped to a white-box “twin” that is more interpretable, with both systems using the same dataset. The framework is instantiated by twinning an artificial neural network (ANN; black box) with a case-based reasoning system (CBR; white box), and mapping the feature weights from the former to the latter to find cases that explain the ANN’s outputs. Using a novel evaluation method, the effectiveness of this twin-system approach is demonstrated by showing that nearest neighbor cases can be found to match the ANN predictions for benchmark datasets. Several feature-weighting methods are competitively tested in two experiments, including our novel, contributions-based method (called COLE), which is found to perform best. The tests consider the “twinning” of traditional multilayer perceptron (MLP) networks and convolutional neural networks (CNNs) with CBR systems. For the CNNs trained on image data, qualitative evidence shows that cases provide plausible explanations for the CNN’s classifications.
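Because this abstract describes mapping feature weights from the ANN into the CBR twin, here is a hedged sketch of one contributions-style weighting in that spirit: each hidden unit's contribution to the predicted class (activation × outgoing weight) forms the space in which a k-NN “twin” retrieves explanatory cases. This is an illustration inspired by COLE, not a reproduction of the paper's exact method; the dataset, scaling, and network shape are assumptions.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hedged sketch of a contributions-weighted ANN-CBR twin (in the spirit of COLE,
# not the paper's exact formulation): the MLP (black box) supplies per-hidden-unit
# contributions to its predicted class, and a k-NN "twin" (white box) retrieves
# explanatory training cases in that contribution space.

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, y)

def contributions(x):
    """Per-hidden-unit contribution to the class the MLP predicts for a single case x."""
    h = np.maximum(0.0, x @ ann.coefs_[0] + ann.intercepts_[0])   # ReLU hidden activations
    logits = h @ ann.coefs_[1] + ann.intercepts_[1]
    c = int(np.argmax(logits))
    return h * ann.coefs_[1][:, c], c                             # activation * weight to predicted class

C_train = np.vstack([contributions(x)[0] for x in X])

def twin_explain(query, n_cases=3):
    """Predicted class plus the indices of the nearest explanatory training cases."""
    c_query, pred = contributions(query)
    nearest = np.argsort(np.linalg.norm(C_train - c_query, axis=1))[:n_cases]
    return pred, nearest

pred, idx = twin_explain(X[0])
print("ANN predicts class", pred, "explained by training cases", idx, "with labels", y[idx])
```

As with any explanation-by-example retrieval, a training case used as the query will retrieve itself first; a held-out query, or skipping the identical match, gives a more realistic picture of the explanatory cases the twin would surface.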