Interpretable Time Series Classification using Linear Models and Multi-resolution Multi-domain Symbolic Representations

Files in This Item:
insight_publication.pdf (1.35 MB, Adobe PDF) — Request a copy
Access to this item has been restricted by the copyright holder until 2020-05-21.
Title: Interpretable Time Series Classification using Linear Models and Multi-resolution Multi-domain Symbolic Representations
Authors: Nguyen, Thach Le
Gsponer, Severin
Ilie, Iulia
O'Reilly, Martin
Ifrim, Georgiana
Permanent link: http://hdl.handle.net/10197/10782
Date: 21-May-2019
Online since: 2019-06-11T06:42:22Z
Abstract: The time series classification literature has expanded rapidly over the last decade, with many new classification approaches published each year. Prior research has mostly focused on improving the accuracy and efficiency of classifiers, with interpretability being somewhat neglected. This aspect of classifiers has become critical for many application domains and the introduction of the EU GDPR legislation in 2018 is likely to further emphasize the importance of interpretable learning algorithms. Currently, state-of-the-art classification accuracy is achieved with very complex models based on large ensembles (COTE) or deep neural networks (FCN). These approaches are not efficient with regard to either time or space, are difficult to interpret and cannot be applied to variable-length time series, requiring pre-processing of the original series to a set fixed length. In this paper we propose new time series classification algorithms to address these gaps. Our approach is based on symbolic representations of time series, efficient sequence mining algorithms and linear classification models. Our linear models are as accurate as deep learning models but are more efficient regarding running time and memory, can work with variable-length time series and can be interpreted by highlighting the discriminative symbolic features on the original time series.
We advance the state-of-the-art in time series classification by proposing new algorithms built using the following three key ideas: (1) Multiple resolutions of symbolic representations: we combine symbolic representations obtained using different parameters, rather than one fixed representation (e.g., multiple SAX representations); (2) Multiple domain representations: we combine symbolic representations in time (e.g., SAX) and frequency (e.g., SFA) domains, to be more robust across problem types; (3) Efficient navigation in a huge symbolic-words space: we extend a symbolic sequence classifier (SEQL) to work with multiple symbolic representations and use its greedy feature selection strategy to effectively filter the best features for each representation. We show that our multi-resolution multi-domain linear classifier (mtSS-SEQL+LR) achieves a similar accuracy to the state-of-the-art COTE ensemble, and to recent deep learning methods (FCN, ResNet), but uses a fraction of the time and memory required by either COTE or deep models. To further analyse the interpretability of our classifier, we present a case study on a human motion dataset collected by the authors. We discuss the accuracy, efficiency and interpretability of our proposed algorithms and release all the results, source code and data to encourage reproducibility.
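To make idea (1) above concrete, the following is a minimal sketch of SAX (the time-domain symbolic representation named in the abstract) applied at multiple resolutions. This is illustrative only, not the authors' mtSS-SEQL+LR implementation; the function and parameter names (`sax`, `word_length`, `alphabet_size`) are assumptions introduced here for the example.

```python
# Illustrative SAX (Symbolic Aggregate approXimation) sketch.
# NOT the authors' code; parameter names are chosen for the example.
from statistics import NormalDist, mean, stdev

def sax(series, word_length=4, alphabet_size=4):
    """Convert a numeric series to a SAX word of `word_length` symbols."""
    # 1. z-normalize the series
    mu, sd = mean(series), stdev(series)
    z = [(x - mu) / sd for x in series] if sd > 0 else [0.0] * len(series)
    # 2. Piecewise Aggregate Approximation: mean of each segment
    n = len(z)
    paa = [mean(z[i * n // word_length:(i + 1) * n // word_length])
           for i in range(word_length)]
    # 3. Discretize with equiprobable breakpoints of the standard normal
    nd = NormalDist()
    breakpoints = [nd.inv_cdf(j / alphabet_size)
                   for j in range(1, alphabet_size)]
    symbols = "abcdefghijklmnopqrstuvwxyz"[:alphabet_size]
    return "".join(symbols[sum(v > bp for bp in breakpoints)] for v in paa)

# "Multiple resolutions" = the same series symbolized with different
# parameters; each resolution contributes its own candidate features.
ts = [1.0, 1.2, 0.8, 3.1, 3.3, 3.0, 0.9, 1.1]
for wl in (2, 4):
    print(wl, sax(ts, word_length=wl))  # → "2 bc" then "4 bcda"
```

In the paper's pipeline, words from many such resolutions (and from the frequency-domain SFA representation) feed a SEQL-style sequence learner, whose greedy selection keeps only the discriminative words for the final linear model.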
Funding Details: Science Foundation Ireland
Type of material: Journal Article
Publisher: Springer
Journal: Data Mining and Knowledge Discovery
Start page: 1
End page: 40
Copyright (published version): 2019 the Authors
Keywords: Time series classification; Multi-resolution; Multi-domain symbolic representations; SAX; SFA; SEQL; Linear models; Interpretable classifier
DOI: 10.1007/s10618-019-00633-3
Language: en
Status of Item: Peer reviewed
Appears in Collections: Insight Research Collection


This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland Licence. No item may be reproduced for commercial purposes. For other possible restrictions on use please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply.