  • Publication
    Great Explanations: Opinionated Explanations for Recommendation
    Explaining recommendations helps users make better, more satisfying decisions. We describe a novel approach to explanation for recommender systems, one that drives the recommendation process while at the same time providing the user with useful insights into why items have been chosen and the trade-offs they may need to consider when making their choice. We describe this approach in the context of a case-based recommender system that harnesses opinions mined from user-generated reviews, and we evaluate it on TripAdvisor hotel data.
      Scopus© Citations: 36
  • Publication
    Automatic Generation of Natural Language Explanations
    An interesting challenge for explainable recommender systems is to provide a successful interpretation of recommendations using structured sentences. It is well known that user-generated reviews have a strong influence on users’ decisions. Recent techniques exploit user reviews to generate natural language explanations. In this paper, we propose a character-level, attention-enhanced long short-term memory model to generate natural language explanations. We empirically evaluate this network on two real-world review datasets. The generated text is readable and similar to a real user’s writing, thanks to the model’s ability to reproduce negation, misspellings, and domain-specific vocabulary.
      Scopus© Citations: 54
  • Publication
    NEAR: A Partner to Explain Any Factorised Recommender System
    Many explainable recommender systems construct explanations of the recommendations their models produce, but it remains difficult to explain to a user why an item was recommended by a high-dimensional latent factor model. In this work, we propose a technique that incorporates interpretations into recommendation training to make accurate predictions while at the same time learning to produce recommendations with the most explanatory utility to the user. Our evaluation shows that we can jointly learn to make accurate predictions and meaningful explanations with only a small sacrifice in recommendation accuracy. We also develop a new algorithm to measure explanation fidelity for the interpretation of top-n rankings. We demonstrate that our approach can form the basis of a universal approach to explanation generation in recommender systems.
  • Publication
    A Multi-Domain Analysis of Explanation-Based Recommendation using User-Generated Reviews
    (AAAI Publications, 2018-05-23)
    This paper extends recent work on the use of explanations in recommender systems. In particular, we show how explanations can be used to rank as well as justify recommendations, and we compare the results to more conventional recommendation approaches in three large-scale application domains.
  • Publication
    On the Use of Opinionated Explanations to Rank and Justify Recommendations
    (Association for the Advancement of Artificial Intelligence, 2016-05-18)
    Explanations are an important part of modern recommender systems. They help users to make better decisions, improve the conversion rate of browsers into buyers, and lead to greater user satisfaction in the long run. In this paper, we extend recent work on generating explanations by mining user reviews. We show how this leads to a novel explanation format that can be tailored to the needs of the individual user. Moreover, we demonstrate how the explanations themselves can be used to rank recommendations so that items which can be associated with a more compelling explanation are ranked ahead of items that have a less compelling explanation. We evaluate our approach using a large-scale, real-world TripAdvisor dataset.