Robust Soft Learning Vector Quantization
An RSLVQ model can be constructed by initializing RslvqModel with the
desired hyper-parameters, e.g. the number of prototypes and the initial
positions of the prototypes, and then calling the RslvqModel.fit function with the
input data. The resulting model will contain the learned prototype
positions and prototype labels, which can be retrieved as the properties w_
and c_w_. Classifications of new data can be made via the predict
function, which computes the Euclidean distances of the input data to
all prototypes and returns the label of the respective closest prototype.
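
For illustration, here is a minimal usage sketch (the sklearn_lvq import path and the prototypes_per_class hyper-parameter name are assumptions; check the API reference for the exact signature):

    import numpy as np
    from sklearn_lvq import RslvqModel  # import path assumed

    # Toy data: two Gaussian blobs with labels 0 and 1.
    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(50, 2) - 2, rng.randn(50, 2) + 2])
    y = np.hstack([np.zeros(50), np.ones(50)])

    model = RslvqModel(prototypes_per_class=1)  # hyper-parameter name assumed
    model.fit(X, y)

    print(model.w_)                     # learned prototype positions
    print(model.c_w_)                   # prototype labels
    print(model.predict([[2.0, 2.0]]))  # label of the closest prototype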

Placing the prototypes is done by maximizing the following cost
function, called the Robust Soft Learning Vector Quantization (RSLVQ)
cost function:

$\displaystyle E_{\mathrm{RSLVQ}} = \sum_{i=1}^l \log\left(\frac{p(x_i, y_i|W)}{p(x_i|W)}\right)$

where $p(x_i, y_i|W)$ is the probability density that $x_i$ is generated by a
mixture component of the correct class $y_i$ and $p(x_i|W)$ is the total
probability density of $x_i$.
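
As a concrete (hedged) reading of this formula, the sketch below computes the cost for fixed prototypes, assuming equal-width Gaussian mixture components $f(x, w) = \exp(-\lVert x - w \rVert^2 / 2\sigma^2)$; the library's exact parameterization may differ:

    import numpy as np

    def rslvq_cost(X, y, W, c_w, sigma=1.0):
        """Sum of log(p(x_i, y_i|W) / p(x_i|W)) over all data points."""
        cost = 0.0
        for x_i, y_i in zip(X, y):
            d = np.sum((W - x_i) ** 2, axis=1)  # squared distances to all prototypes
            f = np.exp(-d / (2 * sigma ** 2))   # unnormalized Gaussian densities
            p_joint = f[c_w == y_i].sum()       # p(x_i, y_i | W): correct-class components
            p_total = f.sum()                   # p(x_i | W): all components
            cost += np.log(p_joint / p_total)
        return cost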

The optimization is performed via a limited-memory version of the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Regarding runtime, the cost
function can be computed in linear time with respect to the data points:
for each data point, we need to compute the distances to all prototypes,
evaluate the densities $p(x_i, y_i|W)$ and $p(x_i|W)$ from these distances,
and sum up the resulting log-ratios; the same goes for the derivative. Thus, RSLVQ scales
linearly with the number of data points.
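
To make the optimization step concrete, here is a hypothetical sketch that hands the negated cost from the previous snippet to SciPy's limited-memory BFGS routine; the library's internal wiring, including the analytic gradient, is not shown:

    from scipy.optimize import minimize

    def fit_prototypes(X, y, W0, c_w, sigma=1.0):
        """Maximize the RSLVQ cost by minimizing its negative with L-BFGS."""
        shape = W0.shape
        objective = lambda w: -rslvq_cost(X, y, w.reshape(shape), c_w, sigma)
        # Without an explicit gradient, SciPy approximates it numerically here.
        res = minimize(objective, W0.ravel(), method="L-BFGS-B")
        return res.x.reshape(shape)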

Matrix Robust Soft Learning Vector Quantization (MRSLVQ)

Matrix Robust Soft Learning Vector Quantization (MRSLVQ) generalizes over RSLVQ by learning a full linear transformation matrix Ω to support classification. The matrix product Ωᵀ ⋅ Ω is called the positive semi-definite relevance matrix Λ. Interpreted this way, MRSLVQ is a metric learning algorithm. It is also possible to initialize the MrslvqModel by setting the dim parameter to an integer less than the data dimensionality, in which case Ω will have only dim rows, performing an implicit dimensionality reduction. This variant is called Limited Rank Matrix LVQ, or LiRaM-LVQ. After initializing the MrslvqModel and calling the fit function on your data set, the learned Ω matrix can be retrieved via the attribute omega_.
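
A minimal sketch of the matrix variant (import path and parameter names assumed as above), including the LiRaM-LVQ rank restriction and the relevance matrix Λ:

    from sklearn_lvq import MrslvqModel  # import path assumed

    model = MrslvqModel(prototypes_per_class=1, dim=1)  # dim < data dimensionality: LiRaM-LVQ
    model.fit(X, y)  # X, y as in the first sketch

    omega = model.omega_   # learned Omega, here with a single row
    lam = omega.T @ omega  # positive semi-definite relevance matrix Lambda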

The following figure shows how MRSLVQ classifies some example data after training. The blue dots represent the prototypes. The yellow and purple dots are the data points. The bigger transparent circles represent the target values and the smaller circles the predicted target values. The plot on the right shows the data and prototypes multiplied with the learned Ω matrix. As can be seen, MRSLVQ effectively projects the data onto a one-dimensional line such that both classes are well distinguished.

[figure: MRSLVQ classification example, original space (left) and Ω-projected space (right)]
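
The right-hand panel corresponds to mapping data and prototypes with the learned Ω, as in this short sketch (reusing the fitted model from above):

    X_proj = X @ model.omega_.T          # data in the learned metric space
    W_proj = model.w_ @ model.omega_.T   # prototypes in the same space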


Local Matrix Robust Soft Learning Vector Quantization (LMRSLVQ)

LmrslvqModel extends RSLVQ by giving each prototype (or class) its own relevances for each feature. This way, LMRSLVQ is able to project the data locally for better classification.

Especially in multi-class data sets, the ideal projection Ω may be different for each class, or even each prototype. Localized Matrix Robust Soft Learning Vector Quantization (LMRSLVQ) accounts for this locality dependence by learning an individual Ωₖ for each prototype k. As with MRSLVQ, the rank of Ωₖ can be bounded by using the dim parameter. After initializing the LmrslvqModel and calling the fit function on your data set, the learned Ωₖ matrices can be retrieved via the attribute omegas_.
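
A minimal sketch for the local variant (import path assumed; same toy data as above):

    from sklearn_lvq import LmrslvqModel  # import path assumed

    model = LmrslvqModel(prototypes_per_class=1, dim=1)
    model.fit(X, y)

    # One local Omega_k per prototype.
    for k, omega_k in enumerate(model.omegas_):
        X_k = X @ omega_k.T  # data as seen through prototype k's local metric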

The following figure shows how LMRSLVQ classifies some example data after training. The blue dots represent the prototypes. The yellow and purple dots are the data points. The bigger transparent circles represent the target values and the smaller circles the predicted target values. The plots in the middle and on the right show the data and prototypes after multiplication with the Ω₁ and Ω₂ matrix, respectively. As can be seen, both prototypes project the data onto one dimension, but they choose orthogonal projection dimensions, such that the data of their own class stays close while the other class gets dispersed, thereby enhancing classification accuracy. An MrslvqModel cannot solve this classification problem, because no global Ω can enhance the classification significantly.

[figure: LMRSLVQ classification example, original space (left) and the Ω₁- and Ω₂-projected spaces (middle, right)]
