Efficient model selection for probabilistic K nearest neighbour classification
File(s)
File | Description | Size | Format
---|---|---|---
insight_publication.pdf | | 256.36 KB |
Author(s)
Date Issued
03 February 2015
Date Available
03 February 2017
Abstract
Probabilistic K-nearest neighbour (PKNN) classification has been introduced to improve the performance of the original K-nearest neighbour (KNN) classification algorithm by explicitly modelling uncertainty in the classification of each feature vector. However, an issue common to both KNN and PKNN is the selection of the optimal number of neighbours, K. The contribution of this paper is to incorporate the uncertainty in K into the decision making, and consequently to provide improved classification with Bayesian model averaging. Indeed, the problem of assessing the uncertainty in K can be viewed as one of statistical model selection, one of the most important technical issues in statistics and machine learning. In this paper, we develop a new functional approximation algorithm to reconstruct the density of the model (order) without relying on time-consuming Monte Carlo simulations. In addition, the algorithm avoids cross-validation by adopting a Bayesian framework. The performance of the proposed approaches is evaluated on several real experimental datasets.
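The abstract describes averaging KNN/PKNN predictions over K, with each K weighted by a posterior that the paper approximates functionally rather than by Monte Carlo. The sketch below is a minimal illustration of that model-averaging idea only, not the paper's algorithm: it weights each candidate K by a leave-one-out pseudo-likelihood on the training set (a hypothetical stand-in for the approximated posterior over K) and averages Dirichlet-smoothed KNN class probabilities. All function names, the smoothing parameter `alpha`, and the weighting scheme are assumptions introduced for illustration.

```python
import numpy as np

def knn_class_probs(X_train, y_train, x, k, n_classes, alpha=1.0):
    """Class probabilities for a query point x from its k nearest neighbours,
    with symmetric Dirichlet (add-alpha) smoothing of the neighbour counts."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    counts = np.bincount(y_train[nearest], minlength=n_classes)
    return (counts + alpha) / (k + alpha * n_classes)

def bma_predict(X_train, y_train, x, k_values, n_classes):
    """Average KNN class probabilities over candidate values of K.
    Each K is weighted by a leave-one-out pseudo-likelihood on the training
    data (an illustrative surrogate for a posterior over K)."""
    log_weights = []
    for k in k_values:
        ll = 0.0
        for i in range(len(X_train)):
            mask = np.arange(len(X_train)) != i
            p = knn_class_probs(X_train[mask], y_train[mask],
                                X_train[i], k, n_classes)
            ll += np.log(p[y_train[i]])
        log_weights.append(ll)
    log_weights = np.array(log_weights)
    w = np.exp(log_weights - log_weights.max())   # softmax over K
    w /= w.sum()
    return sum(wk * knn_class_probs(X_train, y_train, x, k, n_classes)
               for wk, k in zip(w, k_values))

# Toy usage: two well-separated classes, prediction averaged over K in {1, 3, 5}.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(bma_predict(X, y, np.array([0.95, 1.05]), k_values=[1, 3, 5], n_classes=2))
```

The point of the sketch is that the final prediction never commits to a single K: poorly performing neighbourhood sizes receive small weights and contribute little to the averaged class probabilities, which is the behaviour the paper obtains with its Bayesian treatment of K.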
Sponsorship
Science Foundation Ireland
Other Sponsorship
MKE (The Ministry of Knowledge Economy), Korea
Type of Material
Journal Article
Publisher
Elsevier
Journal
Neurocomputing
Volume
149
Issue
Part B
Start Page
1098
End Page
1108
Copyright (Published Version)
2014 Elsevier
Language
English
Status of Item
Peer reviewed
This item is made available under a Creative Commons License
Owning collection
Scopus© citations
12
Views
1408 (1 last week, 17 last month)
Downloads
344 (7 last week, 65 last month)
Acquisition Date
Jan 28, 2023