  • Publication
    Bayesian Case-Exclusion and Explainable AI (XAI) for Sustainable Farming
    Smart agriculture (SmartAg) has emerged as a rich domain for AI-driven decision support systems (DSS); however, it is often challenged by user-adoption issues. This paper reports a case-based reasoning system, PBI-CBR, that predicts grass growth for dairy farmers, combining predictive accuracy with explanation to improve user adoption. PBI-CBR’s key novelty is its use of Bayesian methods for case-base maintenance in a regression domain. Experiments report the tradeoff between predictive accuracy and explanatory capability for different variants of PBI-CBR, and how updating Bayesian priors each year improves performance.
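As a rough illustration of the Bayesian case-exclusion idea described above, the sketch below scores each case in a regression case base by how plausible its target is under a posterior predictive fitted to its nearest neighbours' targets. This is a minimal, assumed reconstruction: the conjugate Normal-Inverse-Gamma prior, the neighbourhood construction, and the tail-probability exclusion rule are illustrative choices, not PBI-CBR's published method.

```python
# Hypothetical sketch of Bayesian case exclusion for a regression case base.
# NOT the authors' PBI-CBR algorithm: the prior, neighbourhood, and exclusion
# rule here are illustrative assumptions only.
import numpy as np
from scipy import stats

def exclusion_scores(X, y, k=10, mu0=0.0, kappa0=1.0, alpha0=2.0, beta0=1.0):
    """Score each case by the two-sided tail probability of its target under a
    Normal-Inverse-Gamma posterior predictive fitted to its k nearest neighbours."""
    scores = np.empty(len(y))
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the case itself
        t = y[nbrs]
        n, tbar = len(t), t.mean()
        # Conjugate updates (Normal likelihood, Normal-Inverse-Gamma prior).
        kappa_n = kappa0 + n
        mu_n = (kappa0 * mu0 + n * tbar) / kappa_n
        alpha_n = alpha0 + n / 2
        beta_n = (beta0 + 0.5 * ((t - tbar) ** 2).sum()
                  + kappa0 * n * (tbar - mu0) ** 2 / (2 * kappa_n))
        # Posterior predictive is Student-t with 2*alpha_n degrees of freedom.
        scale = np.sqrt(beta_n * (kappa_n + 1) / (alpha_n * kappa_n))
        tail = 2 * stats.t.sf(abs(y[i] - mu_n) / scale, df=2 * alpha_n)
        scores[i] = tail                        # small tail => exclusion candidate
    return scores

# Usage: drop cases whose targets are implausible given similar cases.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(0, 0.3, 200)
y[:5] += 10                                     # inject anomalous cases
keep = exclusion_scores(X, y) > 0.01
print(f"retained {keep.sum()} of {len(y)} cases")
```

Updating the prior hyperparameters from one season's retained cases before scoring the next season's would correspond, loosely, to the yearly prior updates the abstract mentions.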
  • Publication
    Twin-Systems to Explain Artificial Neural Networks using Case-Based Reasoning: Comparative Tests of Feature-Weighting Methods in ANN-CBR Twins for XAI
    In this paper, twin-systems are described to address the eXplainable artificial intelligence (XAI) problem, where a black-box model is mapped to a white-box “twin” that is more interpretable, with both systems using the same dataset. The framework is instantiated by twinning an artificial neural network (ANN; black box) with a case-based reasoning system (CBR; white box), and mapping the feature weights from the former to the latter to find cases that explain the ANN’s outputs. Using a novel evaluation method, the effectiveness of this twin-system approach is demonstrated by showing that nearest-neighbor cases can be found to match the ANN predictions for benchmark datasets. Several feature-weighting methods are competitively tested in two experiments, including our novel contributions-based method (called COLE), which is found to perform best. The tests consider the “twinning” of traditional multilayer perceptron (MLP) networks and convolutional neural networks (CNNs) with CBR systems. For the CNNs trained on image data, qualitative evidence shows that cases provide plausible explanations for the CNN’s classifications.
    Scopus© Citations 62
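To make the twinning concrete, here is a hedged sketch of the general recipe: derive local feature weights from a black-box MLP and reuse them in weighted k-NN retrieval, so that nearby cases explain the network's prediction. The input-times-finite-difference-gradient weighting is a stand-in for the paper's contributions-based COLE method, and the dataset and hyperparameters are arbitrary choices, not the authors' setup.

```python
# Hypothetical sketch of an ANN-CBR twin: a black-box MLP's predictions are
# explained by retrieving nearest cases under locally derived feature weights.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Black box: an MLP trained on a standard benchmark; a few queries held out.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, y_train, queries = X[10:], y[10:], X[:10]
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)

def local_weights(model, x, eps=1e-3):
    """Per-feature weights for query x: |x_i * d p(c|x) / d x_i|, with the
    gradient of the predicted class c taken by finite differences
    (an assumed stand-in for the paper's contributions-based COLE)."""
    p0 = model.predict_proba(x[None])[0]
    c = p0.argmax()
    grads = np.empty_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        grads[i] = (model.predict_proba(xp[None])[0, c] - p0[c]) / eps
    return np.abs(x * grads)

def explanatory_cases(model, X_cb, x, k=3):
    """White-box CBR twin: weighted k-NN retrieval over the case base X_cb."""
    w = local_weights(model, x)
    d = np.sqrt(((X_cb - x) ** 2 * w).sum(axis=1))
    return np.argsort(d)[:k]

q = queries[0]
idx = explanatory_cases(mlp, X_train, q)
print("MLP predicts:", mlp.predict(q[None])[0],
      "| explanatory case labels:", y_train[idx])
```

If the retrieved cases carry the same label the MLP predicts for the query, they serve as post-hoc explanation-by-example for that prediction.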
  • Publication
    How Case-Based Reasoning Explains Neural Networks: A Theoretical Analysis of XAI Using Post-Hoc Explanation-by-Example from a Survey of ANN-CBR Twin-Systems
    (Springer, 2019-08-09)
    This paper proposes a theoretical analysis of one approach to the eXplainable AI (XAI) problem, using post-hoc explanation-by-example, that relies on the twinning of artificial neural networks (ANNs) with case-based reasoning (CBR) systems; so-called ANN-CBR twins. It surveys these systems to advance a new theoretical interpretation of previous work and to define a road map for CBR’s further role in XAI. A systematic survey of 1,102 papers was conducted to identify a fragmented literature on this topic and trace its influence to more recent work involving deep neural networks (DNNs). The twin-systems approach is advanced as one possible coherent, generic solution to the XAI problem. The paper concludes by road-mapping future directions for this XAI solution, considering (i) further tests of feature-weighting techniques, (ii) how explanatory cases might be deployed (e.g., in counterfactuals, a fortiori cases), and (iii) the unwelcome, much-ignored issue of user evaluation.
    Scopus© Citations 51
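One deployment mentioned in (ii) above can be illustrated in a few lines: a counterfactual explanatory case retrieved as the query's nearest unlike neighbour (NUN), i.e., the closest stored case whose outcome differs from the model's prediction. This is a common CBR device and an illustrative assumption here, not the paper's formal proposal.

```python
# Hypothetical sketch of a counterfactual explanatory case via the nearest
# unlike neighbour (NUN); an illustration, not the paper's formal method.
import numpy as np

def nearest_unlike_neighbour(X, y, x, pred):
    """Index of the closest case whose label differs from the prediction for x."""
    mask = y != pred
    d = np.linalg.norm(X[mask] - x, axis=1)
    return np.flatnonzero(mask)[d.argmin()]

# Toy usage: the NUN is offered as a counterfactual ("had the query looked
# like this case, the outcome would have differed").
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
y = np.array([0, 0, 1, 1])
print(nearest_unlike_neighbour(X, y, np.array([0.1, 0.2]), pred=0))  # -> 2
```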
  • Publication
    The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
    The notion of twin-systems is proposed to address the eXplainable AI (XAI) problem, where an uninterpretable black-box system is mapped to a white-box “twin” that is more interpretable. In this short paper, we overview very recent work that advances a generic solution to the XAI problem, the so-called twin-system approach. The most popular twinning in the literature is that between an Artificial Neural Network (ANN) as the black box and a Case-Based Reasoning (CBR) system as the white box, where the latter acts as an interpretable proxy for the former. We outline how recent work reviving this idea has applied it to deep learning methods. Furthermore, we detail the many fruitful directions in which this work may be taken, such as determining (i) the most accurate feature-weighting methods to use, (ii) the most appropriate deployments for explanatory cases, and (iii) the cases of most explanatory value to users.