Numerical Cruncher
Classification
Nearest Neighbour(s)
Nearest-neighbour classification methods (k-NN) are widely used in
Pattern Recognition because of their conceptual simplicity and ease of
implementation. Moreover, when the training set is sufficiently large,
their error rate approaches that of the Bayes classifier. k-NN rules
can also be used to estimate probability densities and to classify
unlabelled patterns.
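As a concrete illustration of the basic rule, here is a minimal k-NN classifier in pure Python. The data set, distance function (Euclidean) and k value are illustrative choices, not part of Numerical Cruncher itself:

```python
# Minimal k-NN sketch: label a query by majority vote among its k
# nearest training samples (Euclidean distance, toy data).
from collections import Counter
from math import dist

def knn_classify(train, query, k=3):
    """`train` is a list of (point, label) pairs; points are coordinate tuples."""
    neighbours = sorted(train, key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy two-class reference set.
train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_classify(train, (0.1, 0.1), k=3))  # prints "A"
```

With k = 1 this reduces to the plain nearest-neighbour rule; larger odd values of k make the vote more robust to isolated noisy samples.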
There are several strategies for improving the performance of nearest-neighbour classifiers:
- Reducing the size of the reference set with editing and condensation
methods. Examples of these techniques are Wilson's editing algorithm,
partition-based editing, MultiEdit and Hart's condensing algorithm. All
of them can be tested in Numerical Cruncher ("Edit" menu).
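The editing idea can be sketched with Wilson's rule: discard every sample that disagrees with a k-NN vote taken over the rest of the set. This is a simplified, pure-Python sketch (Euclidean distance and k = 3 are assumptions), not the implementation used by Numerical Cruncher:

```python
# Sketch of Wilson's editing rule: a sample is kept only if the k-NN
# vote over the remaining samples reproduces its own label.
from collections import Counter
from math import dist

def wilson_edit(train, k=3):
    """`train` is a list of (point, label) pairs; returns the edited set."""
    kept = []
    for i, (point, label) in enumerate(train):
        rest = [pl for j, pl in enumerate(train) if j != i]
        neighbours = sorted(rest, key=lambda pl: dist(pl[0], point))[:k]
        vote = Counter(l for _, l in neighbours).most_common(1)[0][0]
        if vote == label:          # the rest of the set agrees: keep it
            kept.append((point, label))
    return kept
```

Samples lying deep inside the wrong class region (typically label noise near the decision boundary) are the ones removed, which tends to smooth the boundary the edited set induces.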
- Improving the computational efficiency of the nearest-neighbour
search. Several algorithms have been proposed for this purpose. For
example, Fukunaga and Narendra's algorithm builds a tree with the
k-means algorithm and, once the tree is constructed, explores it using
a branch-and-bound technique.
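The pruning step at the heart of that search can be shown with a one-level sketch: each group of points is summarised by its centroid and radius, and a group is skipped whenever the triangle inequality proves it cannot contain a closer point. This is only the flavour of Fukunaga and Narendra's method; the toy partition below stands in for a real k-means run, and the full algorithm applies the rule recursively down a tree:

```python
# One-level branch-and-bound nearest-neighbour search: clusters are
# summarised by (centroid, radius); a cluster is pruned when
# dist(query, centroid) - radius cannot beat the best distance so far.
from math import dist, inf

def nn_branch_and_bound(clusters, query):
    """`clusters` is a list of point lists; returns the nearest point."""
    summaries = []
    for points in clusters:
        cx = tuple(sum(c) / len(points) for c in zip(*points))
        radius = max(dist(p, cx) for p in points)
        summaries.append((points, cx, radius))
    # Visit clusters closest-centroid-first so the bound tightens early.
    summaries.sort(key=lambda s: dist(query, s[1]))
    best, best_d = None, inf
    for points, cx, radius in summaries:
        if dist(query, cx) - radius >= best_d:
            continue                 # pruned: no point inside can be closer
        for p in points:
            d = dist(p, query)
            if d < best_d:
                best, best_d = p, d
    return best
```

The pruning test is safe because for any point p in a cluster, dist(query, p) >= dist(query, centroid) - radius, so a pruned cluster provably holds no improvement.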