NEAR: A Partner to Explain Any Factorised Recommender System
File(s)
File | Description | Size | Format
---|---|---|---
insight_publication.pdf | | 645.13 KB |
Author(s)
Date Issued
12 June 2019
Date Available
08 July 2019
Abstract
Many explainable recommender systems construct explanations for the recommendations their models produce, but explaining to a user why an item was recommended by a high-dimensional latent factor model remains a difficult problem. In this work, we propose a technique that incorporates interpretations jointly into recommendation training, making accurate predictions while at the same time learning to produce recommendations with the greatest explanatory utility to the user. Our evaluation shows that we can jointly learn to make accurate and meaningful explanations with only a small sacrifice in recommendation accuracy. We also develop a new algorithm to measure explanation fidelity for the interpretation of top-n rankings. We show that our approach can form the basis of a universal approach to explanation generation in recommender systems.
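To make the joint-training idea in the abstract concrete, here is a minimal sketch in the spirit of explainability-regularised matrix factorisation. It is not the paper's NEAR algorithm: the synthetic ratings, the neighbourhood-based explainability scores `E`, and the weights `lam`, `beta`, `lr` are all illustrative assumptions. The loss combines squared rating error with a term that, weighted by `E[u, i]`, pulls user and item factors together when an explanation for the pair exists, so top-n lists tend toward items that can be justified from observed behaviour.

```python
import numpy as np

# Hypothetical illustration, not the published NEAR method.
rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 8
# Sparse synthetic ratings in {0} ∪ {1..5}; 0 means "unrated".
R = (rng.random((n_users, n_items)) < 0.15) * rng.integers(1, 6, (n_users, n_items))

# E[u, i] in [0, 1]: how "explainable" item i is for user u, here simply the
# share of users who rated i (a crude stand-in for a neighbourhood statistic).
E = (R > 0).mean(axis=0, keepdims=True).repeat(n_users, axis=0)

P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors
lam, beta, lr = 0.02, 0.05, 0.01               # L2 weight, explainability weight, step

for _ in range(200):
    for u, i in zip(*np.nonzero(R)):
        err = R[u, i] - P[u] @ Q[i]
        # Standard MF gradient step plus the gradient of
        # (beta/2) * E[u, i] * ||P[u] - Q[i]||^2, which draws the pair's
        # factors together in proportion to how explainable the pair is.
        P[u] += lr * (err * Q[i] - lam * P[u] - beta * (P[u] - Q[i]) * E[u, i])
        Q[i] += lr * (err * P[u] - lam * Q[i] + beta * (P[u] - Q[i]) * E[u, i])

# A crude fidelity-style check on a top-n list: how explainable, on average,
# are the items the model ranks highest for a given user?
scores = P @ Q.T
top5 = np.argsort(-scores[0])[:5]
print("top-5 items for user 0:", top5)
print("mean explainability of that list:", E[0, top5].mean())
```

The final two prints gesture at the abstract's second contribution: measuring how faithful a top-n ranking is to the available explanations, here reduced to a single averaged score purely for illustration.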
Sponsorship
Science Foundation Ireland
Other Sponsorship
Insight Research Centre
Type of Material
Conference Publication
Publisher
ACM
Start Page
247
End Page
249
Copyright (Published Version)
2019 the Authors
Language
English
Status of Item
Peer reviewed
Part of
UMAP'19 Adjunct: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization
Description
UMAP'19: 27th Conference on User Modeling, Adaptation and Personalization, Larnaca, Cyprus, 9-12 June 2019
ISBN
978-1-4503-6711-0
This item is made available under a Creative Commons License
Scopus© citations
0
Acquisition Date
Feb 5, 2023
Views
623
Last Week
1
Last Month
1
Acquisition Date
Feb 5, 2023
Downloads
229
Last Week
3
Last Month
10
Acquisition Date
Feb 5, 2023