# scikit-learn/scikit-learn

Added paragraphs on metric and nonmetric MDS, explaining the difference

@@ -347,19 +347,46 @@ The overall complexity of standard LTSA is

 Multi-dimensional Scaling (MDS)
 ===============================

-Multidimensional scaling is a technique used for analysis similarity or
+Multidimensional scaling is a technique used for analyzing similarity or
 dissimilarity data. MDS attempts to model similarity or dissimilarity data as
 distances in a geometric spaces. The data can be ratings of similarity between
 objects, interaction frequencies of molecules, or trade indices between
 countries.

 There exists two types of MDS algorithm: metric and non metric. In Metric MDS,
-the distances between two points are set to be as closed as possible as the
-similarity or dissimilarity data. In non metric, the algorithms will try to
-preserve the order of the distances, and hence seek for a monotonic
+the distances between two points are set to be as close as possible to the
+similarity or dissimilarity data. In non-metric MDS, the algorithms try to
+preserve the order of the distances, and hence seek a monotonic
 relationship between the distances in the embedded space and the
 similarities/dissimilarities.

+Let :math:`S` be the similarity matrix, and :math:`X` the coordinates of the
+:math:`n` input points. Disparities :math:`\hat{d}_{ij}` are transformations
+of the similarities, chosen in some optimal way. The stress is then defined
+by :math:`\sum_{i < j} (d_{ij}(X) - \hat{d}_{ij}(X))^2`.
+
+Metric MDS
+----------
+
+In the simplest metric MDS model, called *absolute MDS*, disparities are
+defined by :math:`\hat{d}_{ij} = S_{ij}`. With absolute MDS, the value
+:math:`S_{ij}` should then correspond exactly to the distance between points
+:math:`i` and :math:`j` in the embedding space.
+
+Most commonly, disparities are set to :math:`\hat{d}_{ij} = b S_{ij}`.
+
+Nonmetric MDS
+-------------
+
+Nonmetric MDS focuses on the ordination of the data. If :math:`S_{ij} <
+S_{kl}`, then the embedding should enforce :math:`d_{ij} < d_{kl}`.
+A simple way to enforce this is to compute a monotonic regression of
+:math:`d_{ij}` on :math:`S_{ij}`, yielding disparities :math:`\hat{d}_{ij}`
+in the same order as :math:`S_{ij}`.
+A trivial solution to this problem is to place all the points at the origin.
+To avoid that, the disparities :math:`\hat{d}_{ij}` are normalized.
+
 .. figure:: ../auto_examples/manifold/images/plot_mds.png
    :target: ../auto_examples/manifold/plot_mds.html
    :align: center
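The metric/non-metric distinction described in these paragraphs can be tried out directly with scikit-learn's `MDS` estimator. The sketch below is illustrative only and is not part of the patch; it assumes the `sklearn.manifold.MDS` API with its `metric` flag:

```python
import numpy as np
from sklearn.manifold import MDS

# A small random feature matrix; MDS will compute pairwise Euclidean
# dissimilarities from it internally (dissimilarity="euclidean" is the default).
rng = np.random.RandomState(0)
points = rng.rand(20, 4)

# Metric MDS: the embedded distances approximate the dissimilarity
# values themselves.
metric_mds = MDS(n_components=2, metric=True, n_init=4, random_state=0)
emb_metric = metric_mds.fit_transform(points)

# Nonmetric MDS: only the rank order of the dissimilarities is preserved,
# via a monotonic regression of the distances on the dissimilarities.
nonmetric_mds = MDS(n_components=2, metric=False, n_init=4, random_state=0)
emb_nonmetric = nonmetric_mds.fit_transform(points)

print(emb_metric.shape, emb_nonmetric.shape)  # both (20, 2)
```

Both fits minimize the stress defined above; the nonmetric variant replaces the raw dissimilarities by monotonically regressed disparities at each iteration.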