Stacked-MLkNN: A stacking based improvement to Multi-Label k-Nearest Neighbours
Author(s)
Date Issued
2017-09-22
Date Available
2019-04-18T10:10:01Z
Abstract
Multi-label classification deals with problems where each datapoint can be assigned to more than one class, or label, at the same time. The simplest approach to such problems is to train an independent binary classification model for each label and use these models to independently predict a set of relevant labels for a datapoint. MLkNN is an instance-based lazy learning algorithm for multi-label classification that takes this approach. MLkNN, and similar algorithms, however, do not exploit associations which may exist between the set of potential labels. These methods also suffer from imbalance in the frequency of labels in a training dataset. This work attempts to improve the predictions of MLkNN by implementing a two-layer stack-like method, Stacked-MLkNN, which exploits the label associations. Experiments show that Stacked-MLkNN produces better predictions than MLkNN and several other state-of-the-art instance-based learning algorithms.
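The two-layer idea described in the abstract can be sketched in plain Python. This is an illustrative approximation only, not the authors' exact Stacked-MLkNN (which builds on MLkNN's Bayesian posterior estimates): a first kNN layer produces independent per-label relevance scores, and a second kNN layer re-scores on features augmented with those first-layer scores, so that label associations can influence the final prediction. All function names and the toy distance-based scoring are assumptions for illustration.

```python
# Minimal sketch of a two-layer stacked kNN for multi-label data.
# Illustrative only: the real Stacked-MLkNN uses MLkNN's Bayesian
# posteriors at each layer and a leave-one-out pass over the training
# set when building layer-1 meta-features.
import math

def knn_scores(train_X, train_Y, x, k):
    """For one query x, return per-label relevance scores in [0, 1]:
    the fraction of the k nearest training points carrying each label."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    n_labels = len(train_Y[0])
    return [sum(train_Y[i][l] for i in nearest) / k for l in range(n_labels)]

def stacked_knn_predict(train_X, train_Y, test_X, k=3):
    # Layer 1: independent per-label kNN scores (binary-relevance style).
    # (Sketch shortcut: training points score themselves among their own
    # neighbours; a proper implementation would use leave-one-out here.)
    layer1_train = [knn_scores(train_X, train_Y, x, k) for x in train_X]
    layer1_test = [knn_scores(train_X, train_Y, x, k) for x in test_X]
    # Layer 2: augment the input features with the layer-1 label scores,
    # so the second kNN can exploit associations between labels.
    aug_train = [list(x) + s for x, s in zip(train_X, layer1_train)]
    aug_test = [list(x) + s for x, s in zip(test_X, layer1_test)]
    scores = [knn_scores(aug_train, train_Y, x, k) for x in aug_test]
    # Threshold the final relevance scores into a predicted label set.
    return [[1 if s >= 0.5 else 0 for s in row] for row in scores]
```

On a toy dataset where two labels always co-occur, the second layer sees the first layer's scores for both labels as extra features, which is the mechanism by which label associations can correct the independent per-label predictions.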
Sponsorship
Science Foundation Ireland
Type of Material
Conference Publication
Publisher
JMLR
Copyright (Published Version)
2017 the Authors
Dataset(s)
http://proceedings.mlr.press/v74/
Language
English
Status of Item
Not peer reviewed
Part of
Proceedings of Machine Learning Research. Volume 74: First International Workshop on Learning with Imbalanced Domains: Theory and Applications, 22 September 2017, ECML-PKDD, Skopje, Macedonia
Conference Details
The 1st International Workshop on Learning with Imbalanced Domains: Theory and Applications (LIDTA 2017), Skopje, Macedonia, 18-22 September 2017
This item is made available under a Creative Commons License
Views
749
Acquisition Date
Mar 29, 2024
Downloads
151
Acquisition Date
Mar 29, 2024