Advances in Instance Selection for Instance-Based Learning Algorithms

H. Brighton, Christopher Stuart Mellish

Research output: Contribution to journal › Article

371 Citations (Scopus)

Abstract

The basic nearest neighbour classifier suffers from the indiscriminate storage of all presented training instances. With a large database of instances, classification response time can be slow. When noisy instances are present, classification accuracy can suffer. Drawing on the large body of relevant work carried out in the past 30 years, we review the principal approaches to solving these problems. By deleting instances, both problems can be alleviated, but the criterion used is typically assumed to be all-encompassing and effective over many domains. We argue against this position and introduce an algorithm that rivals the most successful existing algorithm. When evaluated on 30 different problems, neither algorithm consistently outperforms the other: consistency is very hard. To achieve the best results, we need to develop mechanisms that provide insights into the structure of class definitions. We discuss the possibility of these mechanisms and propose some initial measures that could be useful for the data miner.
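To make the idea concrete, here is a minimal sketch of 1-NN classification combined with Wilson's Edited Nearest Neighbour (ENN) noise filter, one classical instance-deletion criterion of the kind the abstract surveys. This is an illustration under simplifying assumptions (2-D points, Euclidean distance), not the algorithm introduced in this paper:

```python
# Sketch: 1-NN classification plus ENN-style instance selection,
# which deletes instances misclassified by their own neighbours.
from collections import Counter
import math

def knn_label(query, instances, k=1, exclude=None):
    """Majority label among the k nearest stored (point, label) pairs."""
    pool = [xy for i, xy in enumerate(instances) if i != exclude]
    pool.sort(key=lambda xy: math.dist(query, xy[0]))
    votes = Counter(label for _, label in pool[:k])
    return votes.most_common(1)[0][0]

def enn_filter(instances, k=3):
    """Keep only instances that agree with their k nearest neighbours."""
    return [(x, y) for i, (x, y) in enumerate(instances)
            if knn_label(x, instances, k=k, exclude=i) == y]

# Two well-separated clusters, plus one mislabelled (noisy) instance
# sitting inside cluster 'a'.
train = [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'), ((0.2, 0.1), 'a'),
         ((5.0, 5.0), 'b'), ((5.1, 4.9), 'b'), ((4.9, 5.2), 'b'),
         ((0.15, 0.1), 'b')]
edited = enn_filter(train, k=3)
print(len(train), len(edited))        # -> 7 6: the noisy point is deleted
print(knn_label((0.05, 0.05), edited))  # -> a
```

Deleting the noisy instance both shrinks the stored set (faster response) and removes the point that would otherwise corrupt nearby classifications, which is exactly the trade-off the abstract describes.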

Original language: English
Pages (from-to): 153-172
Number of pages: 19
Journal: Data Mining and Knowledge Discovery
Volume: 6
Issue number: 2
DOI: 10.1023/A:1014043630878
Publication status: Published - Apr 2002

Keywords

  • instance-based learning
  • instance selection
  • forgetting
  • pruning
  • NEAREST-NEIGHBOR RULE
  • CLASSIFICATION

Cite this

Advances in Instance Selection for Instance-Based Learning Algorithms. / Brighton, H.; Mellish, Christopher Stuart.

In: Data Mining and Knowledge Discovery, Vol. 6, No. 2, 04.2002, p. 153-172.


@article{49e7579ffde84968928e8bbe21df4652,
title = "Advances in Instance Selection for Instance-Based Learning Algorithms",
abstract = "The basic nearest neighbour classifier suffers from the indiscriminate storage of all presented training instances. With a large database of instances, classification response time can be slow. When noisy instances are present, classification accuracy can suffer. Drawing on the large body of relevant work carried out in the past 30 years, we review the principal approaches to solving these problems. By deleting instances, both problems can be alleviated, but the criterion used is typically assumed to be all-encompassing and effective over many domains. We argue against this position and introduce an algorithm that rivals the most successful existing algorithm. When evaluated on 30 different problems, neither algorithm consistently outperforms the other: consistency is very hard. To achieve the best results, we need to develop mechanisms that provide insights into the structure of class definitions. We discuss the possibility of these mechanisms and propose some initial measures that could be useful for the data miner.",
keywords = "instance-based learning, instance selection, forgetting, pruning, NEAREST-NEIGHBOR RULE, CLASSIFICATION",
author = "H. Brighton and Mellish, {Christopher Stuart}",
year = "2002",
month = "4",
doi = "10.1023/A:1014043630878",
language = "English",
volume = "6",
pages = "153--172",
journal = "Data Mining and Knowledge Discovery",
issn = "1384-5810",
publisher = "Springer Netherlands",
number = "2",
}
