Dimensionality Reduction Through Sub-Space Mapping for Nearest Neighbour Algorithms

T R Payne, P Edwards

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

5 Citations (Scopus)
5 Downloads (Pure)

Abstract

Many learning algorithms make an implicit assumption that all the attributes present in the data are relevant to a learning task. However, several studies have demonstrated that this assumption rarely holds; for many supervised learning algorithms, the inclusion of irrelevant or redundant attributes can degrade classification accuracy. While a variety of dimensionality reduction methods exist, many are only appropriate for datasets that contain a small number of attributes (e.g. < 20). This paper presents an alternative approach to dimensionality reduction, and demonstrates how it can be combined with a Nearest Neighbour learning algorithm. We present an empirical evaluation of this approach, and contrast its performance with two related techniques: a Monte-Carlo wrapper and an Information Gain-based filter approach.
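To illustrate the kind of filter baseline the abstract refers to (not the paper's sub-space mapping method itself), the following minimal sketch combines an Information Gain-style filter with a Nearest Neighbour classifier using scikit-learn. The dataset, the value of k, and the number of retained attributes are arbitrary choices made here for demonstration only.

```python
# Minimal sketch, assuming scikit-learn: an Information Gain-style filter
# (mutual information) followed by a k-Nearest Neighbour classifier.
# This is NOT the paper's sub-space mapping approach; it only illustrates
# the filter-plus-kNN baseline described in the abstract.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Example dataset (an assumption; any labelled dataset with many attributes works).
X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    # Filter step: rank attributes by mutual information with the class label
    # and keep the 10 highest-scoring ones (an assumed cut-off).
    ("filter", SelectKBest(score_func=mutual_info_classif, k=10)),
    # Nearest Neighbour classifier applied to the reduced attribute set.
    ("knn", KNeighborsClassifier(n_neighbors=3)),
])

# Cross-validated accuracy with and without the filter, to show how
# irrelevant or redundant attributes can affect a k-NN learner.
filtered = cross_val_score(pipeline, X, y, cv=10).mean()
baseline = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=10).mean()
print(f"k-NN on all attributes:      {baseline:.3f}")
print(f"k-NN on filtered attributes: {filtered:.3f}")
```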

Original language: English
Title of host publication: Machine Learning: ECML 2000
Subtitle of host publication: 11th European Conference on Machine Learning, Barcelona, Catalonia, Spain, May 31 – June 2, 2000, Proceedings
Editors: Ramon López de Mantaras, Enric Plaza
Place of Publication: Berlin, Germany
Publisher: Springer-Verlag
Pages: 331-343
Number of pages: 13
Volume: 1810
ISBN (Electronic): 978-3-540-45164-8
ISBN (Print): 978-3-540-67602-7
DOIs
Publication status: Published - 2000

Publication series

Name: Lecture Notes in Artificial Intelligence
Publisher: Springer

Keywords

  • machine learning
  • feature selection
  • nearest-neighbour
