Improving multi-label classification using inter-label associations and a new Kalman filter based ensemble method
Author(s)
Date Issued
2020
Date Available
2020-11-04T08:58:47Z
Abstract
In machine learning, classification algorithms are used to train models to recognise the class, or category, to which an object belongs. Most classification problems are multi-class, in which an object can belong to at most one class. However, there are many important real-world problems in which an object can belong to more than one class simultaneously. These are known as multi-label classification problems, as an object can be labelled with more than one class.

The multi-label classification algorithms in the literature range from very simple approaches to sophisticated ensemble techniques, such as classifier chains, that build collections of interconnected classifiers. The most basic methods are binary relevance and label powerset: binary relevance treats each label as an independent binary classification task and learns one binary classifier per label, while label powerset converts each unique combination of label assignments into a distinct class and trains a single multi-class classifier. The most effective approaches tend to explicitly exploit relationships between the labels themselves (inter-label associations) and to use ensembles. There is an opportunity, however, to take advantage of inter-label associations more explicitly, and to use ensembling techniques more sophisticated than the bagging-based approaches that dominate the multi-label classification literature. In the multi-class domain, ensemble methods generally perform much better than individual classifier models, yet, apart from bagging-like methods, little work has been done on boosting or boosting-like methods for multi-label classification.

This thesis investigates new algorithms for training multi-label classification models that exploit inter-label associations and/or utilise ensemble models (especially boosting-like methods). Three new methods are proposed: Stacked-MLkNN, a stacked-ensemble-based lazy learning algorithm that exploits inter-label associations at the stacked layer; CascadeML, a neural network training algorithm that uses a cascade architecture to exploit inter-label associations and evolves the network architecture during training, which minimises the requirement for hyperparameter tuning; and KFHE-HOMER, a multi-label ensemble training algorithm built on a newly proposed perspective that views ensemble training as a static state estimation problem which can be solved using the sensor fusion properties of the Kalman filter. This new perspective on ensemble training is itself a contribution of the thesis, as are two new multi-class classification algorithms that exploit it: the Kalman Filter-based Heuristic Ensemble (KFHE) and KalmanTune.

Each newly proposed method is extensively evaluated across a set of well-known benchmark multi-label classification datasets and compared to current state-of-the-art methods. Each is found to be highly effective. Stacked-MLkNN performs better than all other existing instance-based multi-label classification algorithms against which it was compared. CascadeML creates models with performance comparable to the best-performing multi-label methods without requiring extensive hyperparameter tuning. KFHE outperforms leading multi-class ensemble methods, and KalmanTune can improve the performance of ensembles trained using boosting. Finally, KFHE-HOMER was found to perform better than all other multi-label classification methods against which it was compared.
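To make the two baseline methods concrete, the following minimal Python sketch (illustrative only, not code from the thesis; it assumes scikit-learn's LogisticRegression as the base learner and a small toy label matrix Y) contrasts binary relevance with label powerset:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy multi-label data: 6 instances, 2 features, 3 labels.
# Y[i, j] == 1 means instance i carries label j.
X = np.array([[0.1, 1.2], [0.8, 0.4], [1.1, 0.2],
              [0.2, 0.9], [0.9, 1.0], [1.2, 1.1]])
Y = np.array([[1, 0, 1], [0, 1, 0], [0, 1, 1],
              [1, 0, 1], [1, 1, 0], [0, 1, 1]])

# Binary relevance: one independent binary classifier per label column.
br_models = [LogisticRegression().fit(X, Y[:, j]) for j in range(Y.shape[1])]
br_pred = np.column_stack([m.predict(X) for m in br_models])

# Label powerset: each unique label combination becomes one class,
# then a single multi-class classifier is trained.
combos = {tuple(row) for row in Y}
combo_to_class = {c: k for k, c in enumerate(sorted(combos))}
y_lp = np.array([combo_to_class[tuple(row)] for row in Y])
lp_model = LogisticRegression().fit(X, y_lp)
class_to_combo = {k: c for c, k in combo_to_class.items()}
lp_pred = np.array([class_to_combo[k] for k in lp_model.predict(X)])

print("binary relevance predictions:\n", br_pred)
print("label powerset predictions:\n", lp_pred)

Note the trade-off the sketch exposes: binary relevance ignores inter-label associations entirely, while label powerset captures them but can only ever predict labelsets seen during training.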
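The Kalman-filter perspective on ensembling can be sketched with the standard scalar Kalman update for a static state. This is the textbook sensor-fusion form under an assumption of no process noise, given here only to illustrate the idea; it is not necessarily the exact formulation used in KFHE:

% Scalar Kalman update for a static state x (textbook sensor-fusion form).
% \hat{x}_t: current ensemble estimate, p_t: its uncertainty,
% z_t: the t-th component's prediction ("measurement"), r_t: its error.
\begin{align}
K_t &= \frac{p_t}{p_t + r_t} && \text{(Kalman gain)} \\
\hat{x}_{t+1} &= \hat{x}_t + K_t\,(z_t - \hat{x}_t) && \text{(fuse new component)} \\
p_{t+1} &= (1 - K_t)\,p_t && \text{(uncertainty shrinks)}
\end{align}

Read this way, each new ensemble component acts as a noisy sensor measuring the same fixed quantity, and the Kalman gain weights its contribution by its estimated error, so the combined estimate's uncertainty decreases as components are fused.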
Type of Material
Doctoral Thesis
Publisher
University College Dublin. School of Computer Science
Qualification Name
Ph.D.
Copyright (Published Version)
© 2020 the Author
Language
English
Status of Item
Peer reviewed
This item is made available under a Creative Commons License
File(s)
Name
7892871.pdf
Size
3.13 MB
Format
Adobe PDF
Checksum (MD5)
f1a2347f681c95d4290093a5167a631d
Owning collection