LVQ
In computer science, Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.

Overview

LVQ can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and to the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.

An LVQ system is represented by prototypes W = (w(1), ..., w(n)) which are defined in the feature space of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.
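
The winner-take-all update described above can be sketched as a short program. The following is a minimal, illustrative LVQ1-style implementation, not a definitive one: the learning rate, number of epochs, and the choice of the plain Euclidean distance are assumptions made for the example, and the prototypes are assumed to be supplied already initialized (e.g. with one or more per class).

    import numpy as np

    def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
        """Minimal LVQ1-style sketch: attract the winning prototype toward a
        correctly classified sample, repel it from a misclassified one."""
        W = np.asarray(prototypes, dtype=float).copy()
        for _ in range(epochs):
            for x, label in zip(np.asarray(X, dtype=float), y):
                d = np.linalg.norm(W - x, axis=1)   # distance to every prototype
                k = int(np.argmin(d))               # index of the winner
                if proto_labels[k] == label:
                    W[k] += lr * (x - W[k])         # move winner closer (correct)
                else:
                    W[k] -= lr * (x - W[k])         # move winner away (incorrect)
        return W

    def classify(x, W, proto_labels):
        """Assign the class label of the nearest prototype."""
        return proto_labels[int(np.argmin(np.linalg.norm(W - x, axis=1)))]

In practice the learning rate is usually decreased over the course of training; that schedule is omitted here for brevity.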

An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.
LVQ systems can be applied to multi-class classification problems in a natural way.
It is used in a variety of practical applications; see http://liinwww.ira.uka.de/bibliography/Neural/SOM.LVQ.html for an extensive bibliography.

A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system, see e.g. (Schneider, Biehl, and Hammer, 2009) and references therein.
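
As an illustration of such a parameterized measure, the sketch below uses a relevance-weighted squared Euclidean distance with one adaptive weight per feature, in the spirit of the relevance-learning variants cited above. The function name and the normalization of the weights are illustrative assumptions, not details taken from the cited work.

    import numpy as np

    def relevance_distance(x, w, relevances):
        """Relevance-weighted squared Euclidean distance:
        d(x, w) = sum_j lambda_j * (x_j - w_j)**2,
        with lambda_j >= 0 normalized to sum to 1. In adaptive-metric LVQ
        variants, the relevance vector is adjusted during training along
        with the prototypes."""
        lam = np.asarray(relevances, dtype=float)
        lam = lam / lam.sum()                       # normalize the relevances
        diff = np.asarray(x, dtype=float) - np.asarray(w, dtype=float)
        return float(np.sum(lam * diff ** 2))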

LVQ has also been applied to the classification of text documents.

External links

  • LVQ for WEKA: Implementation of LVQ variants (LVQ1, OLVQ1, LVQ2.1, LVQ3, OLVQ3) for the WEKA Machine Learning Workbench.
  • lvq_pak official release (1996) by Kohonen and his team
  • LVQ for WEKA: Another implementation of LVQ for the WEKA Machine Learning Workbench.