Visual Analytics Course

Multi-dimensional scaling (MDS)

Now, I am assuming you came to this page after reading the PCA section, so you've heard of the Cars dataset. The PCA visualization is fine, except maybe that it projects the data along linear axes – those axes given by the two largest eigenvalues of the covariance matrix. These axes are linear combinations of the original axes, those associated with the original attributes in the data. But what if attributes do not correlate linearly? Should the projection necessarily obey a linear transform? This is what we investigate here: finding a way to project data based on the relative distances between objects rather than on linear dependencies between variables (attributes).

MDS is a long-studied problem originating from the social sciences. The book by Borg and Groenen provides a good historical background on MDS.

The problem is easily stated. Given $n$ data items and dissimilarity measures $\delta_{ij}$ between these items ($1 \leq i, j \leq n$), assign each of the data items a position $x_i$ in a Euclidean space $\mathbb{R}^p$ such that the Euclidean distances $d_{ij} = \|x_i - x_j\|$ are as close as possible to the dissimilarities $\delta_{ij}$.

Although easily stated, the problem turns out to be quite complex. The book by Groenen and Borg offers an extensive discussion of the matrix solution to this problem. An early exposition of this method is due to Kruskal.

Kruskal, J. B. and M. Wish (1978). Multidimensional Scaling, Sage Publications.

The text collected here is heavily inspired by Borg & Groenen, Chapter 8. Chapter 7 of the same book goes over all the matrix algebra basics we need.

A popular solution to this problem is to use an approach that is quite similar to the spring embedding approach for general graphs. We shall however look at a matrix approach which turns out to be quite effective on smaller datasets.

This time we are given a square matrix of dissimilarities $\Delta = (\delta_{ij})_{1 \leq i, j \leq n}$ (with $\delta_{ii} = 0$ and $\delta_{ij} = \delta_{ji}$).

In the same manner, the goal we seek is not to preserve the inertia of the initial point cloud, but rather to preserve dissimilarities. Denote by $f$ the projection sending the data items to positions $x_i$ in $\mathbb{R}^p$ (where we would like the dimension $p$ to be small compared to that of the original data). Also, write $d_{ij}({\bf X})$ for the distance between the points $x_i$ and $x_j$ in the Euclidean space $\mathbb{R}^p$.

The stress function measures just how close we are to having a projection that satisfies all given dissimilarities:

$\sigma({\bf X}) = \sum_{i < j} \left( \delta_{ij} - d_{ij}({\bf X}) \right)^2$

Exercises / Assignments

  • Write down some python code computing the stress value $\sigma({\bf X})$ assigned to a configuration ${\bf X}$ (a possible sketch is given below).
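For instance, here is a minimal sketch using numpy, assuming the configuration is stored as an $n \times p$ array ''X'' and the dissimilarities as an $n \times n$ array ''D'' (these names are just conventions used on this page):

<code python>
import numpy as np

def raw_stress(X, D):
    """Raw stress of a configuration X (n x p array of positions)
    with respect to a dissimilarity matrix D (n x n array)."""
    # Pairwise Euclidean distances between the rows of X
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Sum over pairs i < j of the squared discrepancies
    i, j = np.triu_indices(len(X), k=1)
    return ((D[i, j] - dist[i, j]) ** 2).sum()
</code>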

The function $\sigma({\bf X})$ is called raw stress, as opposed to other variations such as normalized stress:

$\sigma_n({\bf X}) = \frac{\sum_{i < j} \left( \delta_{ij} - d_{ij}({\bf X}) \right)^2}{\sum_{i < j} \delta_{ij}^2}$

Observe that we could as well have normalized using the expression $\sum_{i < j} d_{ij}({\bf X})^2$ as the denominator. Actually, we could push these variations further (as did Basalaj in his PhD manuscript, section 3.2, page 22 ff.) and consider a stress function involving a scale parameter $\alpha$:

$\sigma_\alpha({\bf X}) = \sum_{i < j} \left( \delta_{ij} - \alpha\, d_{ij}({\bf X}) \right)^2$

Basalaj then looks at the optimal value one can assign to $\alpha$, which obviously relates to how well dissimilarities correlate to actual distances in $\mathbb{R}^p$. Intuitively, using such a parameter makes sense since stress should not depend on scaling factors (zooming in or out in the drawing affects distances while dissimilarities remain the same).

  • Following Basalaj (have a look at section 3 of the PhD thesis), show that the optimal value of $\alpha$ involves Tucker's congruence coefficient between the dissimilarities $\delta_{ij}$ and the distances $d_{ij}({\bf X})$ (a sketch of the computation is given below).
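As a hint, here is a sketch of the computation, assuming the $\alpha$-parametrized stress written above. Expanding gives $\sigma_\alpha({\bf X}) = \sum_{i<j} \delta_{ij}^2 - 2\alpha \sum_{i<j} \delta_{ij}\, d_{ij}({\bf X}) + \alpha^2 \sum_{i<j} d_{ij}({\bf X})^2$. Setting the derivative with respect to $\alpha$ to zero yields the optimal value

$\alpha^\ast = \frac{\sum_{i<j} \delta_{ij}\, d_{ij}({\bf X})}{\sum_{i<j} d_{ij}({\bf X})^2}$

and plugging $\alpha^\ast$ back in gives $\sigma_{\alpha^\ast}({\bf X}) = \left( \sum_{i<j} \delta_{ij}^2 \right) (1 - c^2)$, where $c = \frac{\sum_{i<j} \delta_{ij}\, d_{ij}({\bf X})}{\sqrt{\sum_{i<j} \delta_{ij}^2}\, \sqrt{\sum_{i<j} d_{ij}({\bf X})^2}}$ is precisely Tucker's congruence coefficient between the dissimilarities and the distances.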

An iterative majorization algorithm

We first state the Borg-Groenen algorithm computing an MDS projection from dissimilarity data, before giving details on how to compute its various elements.

  • Start from a random configuration ${\bf X}^{[0]}$ of $n$ points in $\mathbb{R}^p$. The algorithm shall compute a sequence of candidate projections ${\bf X}^{[k]}$.
  • First select a (usually small) precision parameter $\epsilon > 0$, to be used as a stopping criterion: the iteration shall stop after two consecutive projections have stress values that are $\epsilon$-close, that is, whenever $\sigma^{[k-1]} - \sigma^{[k]} < \epsilon$.
  • Compute $\sigma({\bf X}^{[0]})$ and set $\sigma^{[0]} = \sigma({\bf X}^{[0]})$ (this is only for convenience in order to bootstrap the algorithm).
  • Next, we iterate the following steps for a given number of iterations, or until we reach the stopping condition $\sigma^{[k-1]} - \sigma^{[k]} < \epsilon$:
    • Increase $k$ by 1
    • Compute the Guttman transform ${\bf X}^{[k]}$ of ${\bf X}^{[k-1]}$ (see below)
    • Compute $\sigma({\bf X}^{[k]})$
    • Set $\sigma^{[k]} = \sigma({\bf X}^{[k]})$

Digging into the algorithm

Let's have a closer look at the stress function and write it down in a compact, useful form. We have:

$\sigma({\bf X}) = \sum_{i < j} \left( \delta_{ij} - d_{ij}({\bf X}) \right)^2 = \sum_{i < j} \delta_{ij}^2 + \sum_{i < j} d_{ij}^2({\bf X}) - 2 \sum_{i < j} \delta_{ij}\, d_{ij}({\bf X}) = \eta_\delta^2 + \eta^2({\bf X}) - 2 \rho({\bf X})$

implicitly defining the quantities $\eta_\delta^2$, $\eta^2({\bf X})$ and $\rho({\bf X})$. Observe that the embedding can be described as an $n \times p$ matrix:

${\bf X} = (x_{ia})_{1 \leq i \leq n,\ 1 \leq a \leq p}$

Recall also that we denote a row vector of this matrix as $x_i$ (the coordinates of point $i$ in $\mathbb{R}^p$). We will also need to manipulate the column vectors of this matrix (one per dimension $a = 1, \ldots, p$). Now, let's see how the different terms in the stress function can be expressed as matrix equations.

Expressing $\eta^2({\bf X})$ using matrix algebra

Let us first look at a term occurring in the computation of squared distances (those terms in $\eta^2({\bf X})$). We form the difference $x_i - x_j$ using the unit orthogonal column vectors $e_i$ (the unique vector having all zeros except a 1 in position $i$), that is $x_i = {\bf X}^T e_i$; and similarly $x_j = {\bf X}^T e_j$. The difference we need to compute simply is:

$x_i - x_j = {\bf X}^T (e_i - e_j)$

so that the square of this quantity is equal to:

$d_{ij}^2({\bf X}) = (x_i - x_j)^T (x_i - x_j) = (e_i - e_j)^T {\bf X}\, {\bf X}^T (e_i - e_j)$

where we note that the product $A_{ij} = (e_i - e_j)(e_i - e_j)^T$ indeed is a matrix with 1's at positions $(i,i)$ and $(j,j)$ on the diagonal, and $-1$'s at positions $(i,j)$ and $(j,i)$. (Compute an example to make sure you see the pattern.)

The squared distance may thus be computed as the trace of a matrix (the sum of its diagonal elements):

$d_{ij}^2({\bf X}) = \mathrm{tr}\, {\bf X}^T A_{ij}\, {\bf X}$

(Again, spell out the equation to make sure you see the trick.)

Now, we need to collect these squared distances for all pairs $i < j$:

$\eta^2({\bf X}) = \sum_{i < j} d_{ij}^2({\bf X}) = \mathrm{tr}\, {\bf X}^T \Big( \sum_{i < j} A_{ij} \Big) {\bf X} = \mathrm{tr}\, {\bf X}^T {\bf V}\, {\bf X}$

which implicitly defines the matrix ${\bf V} = \sum_{i < j} A_{ij}$ (a matrix with $n - 1$ on the diagonal and $-1$ everywhere else).

Exercises

These exercises are meant to make sure you follow all the preceding computations.

  • Set $n$ to a small value, say $n = 3$, and compute $e_i - e_j$ for all pairs $i < j$. For instance, we have $e_1 - e_2 = (1, -1, 0)^T$.
  • Compute the matrix product ${\bf X}^T A_{ij}\, {\bf X}$ for each pair $i < j$. Deduce that the trace $\mathrm{tr}\, {\bf X}^T A_{ij}\, {\bf X}$ is indeed equal to $d_{ij}^2({\bf X})$.
  • Write down the matrix ${\bf V}$.
  • Write down some python code that computes the vector product $(e_i - e_j)(e_i - e_j)^T$ (so you can 'see' that it indeed produces a matrix). Explore the numpy library, which offers everything you need to perform matrix algebra.
  • Write down some python code that computes the matrix $A_{ij}$ for given indices $i$ and $j$. Explore the numpy library, which offers everything you need to perform matrix algebra.
  • Write down python code that computes the matrix ${\bf V}$. Explore the numpy library, which offers everything you need to perform matrix algebra. (Possible sketches for these last items are given below.)
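Here is one possible sketch for the coding items above, using numpy (the function and variable names are only suggestions):

<code python>
import numpy as np

def A(n, i, j):
    """Matrix A_ij = (e_i - e_j)(e_i - e_j)^T, with 0-based indices i, j."""
    e = np.eye(n)
    d = (e[i] - e[j]).reshape(-1, 1)   # column vector e_i - e_j
    return d @ d.T

def V(n):
    """Matrix V = sum of A_ij over all pairs i < j."""
    return sum(A(n, i, j) for i in range(n) for j in range(i + 1, n))

# Quick check of the trace identity: tr(X^T A_ij X) equals d_ij(X)^2
n, p = 5, 2
X = np.random.rand(n, p)
i, j = 1, 3
print(np.trace(X.T @ A(n, i, j) @ X), np.sum((X[i] - X[j]) ** 2))
# V has n-1 on the diagonal and -1 everywhere else
print(V(n))
</code>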

Majorizing $-\rho({\bf X})$

Let's now switch to $\rho({\bf X})$ and again try to express it using matrix algebra. This time however, since $\rho({\bf X})$ enters the stress with a minus sign, we seek a majorization of $-\rho({\bf X})$ (a bound from above), that is, a lower bound on $\rho({\bf X})$. This follows from the Cauchy-Schwarz inequality (a classical inequality in real analysis): for any real vectors $u = (u_1, \ldots, u_p)$ and $v = (v_1, \ldots, v_p)$,

$\sum_{a=1}^p u_a v_a \leq \sqrt{\sum_{a=1}^p u_a^2}\, \sqrt{\sum_{a=1}^p v_a^2}$

Note that we have equality exactly when $u$ and $v$ are proportional (with a non-negative factor). Now, set $u = x_i - x_j$ and $v = z_i - z_j$, where ${\bf Z} = {\bf X}^{[k-1]}$ (remember from the algorithm stated above that ${\bf X}^{[k-1]}$ stores the previous value of ${\bf X}$). The Cauchy-Schwarz inequality then reads:

$(x_i - x_j)^T (z_i - z_j) \leq d_{ij}({\bf X})\, d_{ij}({\bf Z})$

(with equality if ${\bf X} = {\bf Z}$) from which we obtain, dividing by $d_{ij}({\bf Z})$ (assumed non-zero):

$d_{ij}({\bf X}) \geq \frac{(x_i - x_j)^T (z_i - z_j)}{d_{ij}({\bf Z})}$

Now, we need to collect these terms, going over all pairs $i < j$, and form an expression for $\rho({\bf X})$ similar to the one obtained in the previous paragraph for $\eta^2({\bf X})$:

$\rho({\bf X}) = \sum_{i < j} \delta_{ij}\, d_{ij}({\bf X}) \geq \sum_{i < j} \frac{\delta_{ij}}{d_{ij}({\bf Z})} (x_i - x_j)^T (z_i - z_j) = \mathrm{tr}\, {\bf X}^T B({\bf Z})\, {\bf Z}$

where this time we have introduced the matrix $B({\bf Z})$ with values $b_{ij} = -\delta_{ij} / d_{ij}({\bf Z})$ for $i \neq j$ and $d_{ij}({\bf Z}) \neq 0$, while $b_{ij} = 0$ for $i \neq j$ and $d_{ij}({\bf Z}) = 0$; and $b_{ii} = -\sum_{j \neq i} b_{ij}$ on the diagonal.

Because equality occurs when ${\bf X} = {\bf Z}$, we have a majorization inequality:

$-\rho({\bf X}) \leq -\mathrm{tr}\, {\bf X}^T B({\bf Z})\, {\bf Z}, \qquad \text{with } \rho({\bf Z}) = \mathrm{tr}\, {\bf Z}^T B({\bf Z})\, {\bf Z}$

The expression $-\rho({\bf X})$ can thus be majorized by a linear function in ${\bf X}$.

Exercises / Assignments

  • Write down some python code that computes the matrix $B({\bf Z})$, given any matrix ${\bf Z}$ (a possible sketch is given below).
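A possible sketch using numpy; note that $B({\bf Z})$ also depends on the dissimilarities, so the function below takes both the dissimilarity matrix ''D'' and the configuration ''Z'' as arguments (names are only suggestions):

<code python>
import numpy as np

def B(D, Z):
    """Matrix B(Z), given the dissimilarity matrix D and a configuration Z."""
    # Pairwise distances d_ij(Z)
    diff = Z[:, None, :] - Z[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Off-diagonal entries: -delta_ij / d_ij(Z), and 0 whenever d_ij(Z) = 0
    with np.errstate(divide='ignore', invalid='ignore'):
        b = np.where(dist > 0, -D / dist, 0.0)
    np.fill_diagonal(b, 0.0)
    # Diagonal entries: minus the sum of the other entries on the row
    np.fill_diagonal(b, -b.sum(axis=1))
    return b
</code>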

Coming back to the algorithm

The previous computations lead to the inequality:

$\sigma({\bf X}) \leq \eta_\delta^2 + \mathrm{tr}\, {\bf X}^T {\bf V}\, {\bf X} - 2\, \mathrm{tr}\, {\bf X}^T B({\bf Z})\, {\bf Z} = \tau({\bf X}, {\bf Z})$

where the last term $\tau({\bf X}, {\bf Z})$ is just a convenient shorthand for this rather long expression involving ${\bf X}$ and ${\bf Z}$.

Now, we seek to minimize stress, so we minimize the right-hand term of the last inequality. Because $\tau({\bf X}, {\bf Z})$ is a quadratic function in ${\bf X}$, we simply need to compute its derivative and equate it to zero (thanks to matrix algebra, this works just like with one-variable real functions). That is, we want:

$\nabla_{\bf X}\, \tau({\bf X}, {\bf Z}) = 2 {\bf V}\, {\bf X} - 2 B({\bf Z})\, {\bf Z} = 0$

so that ${\bf V}\, {\bf X} = B({\bf Z})\, {\bf Z}$. This would naturally solve into ${\bf X} = {\bf V}^{-1} B({\bf Z})\, {\bf Z}$ except that ${\bf V}$ has no usual inverse (it is singular, since its rows sum to zero). We are forced to use a sophisticated matrix algebra trick and go through the Moore-Penrose inverse ${\bf V}^+ = \frac{1}{n} J$, where $J = I - \frac{1}{n} {\bf 1}{\bf 1}^T$ (the matrix $J$ is usually called the centering matrix).

The transform

${\bf X}^{[k]} = {\bf V}^+ B({\bf X}^{[k-1]})\, {\bf X}^{[k-1]}$

is called the Guttman transform (of ${\bf X}^{[k-1]}$). (Note that this is just the last identity we obtained, where we substituted ${\bf Z}$ with ${\bf X}^{[k-1]}$.)
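A possible sketch of the corresponding computation, using numpy (the off-diagonal handling of $B$ is repeated here so the snippet is self-contained; the names are only suggestions):

<code python>
import numpy as np

def guttman_transform(D, Z):
    """One Guttman transform step X = V^+ B(Z) Z, for dissimilarities D."""
    n = len(Z)
    # B(Z): off-diagonal -delta_ij / d_ij(Z) (0 if d_ij = 0), diagonal = -row sums
    diff = Z[:, None, :] - Z[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    with np.errstate(divide='ignore', invalid='ignore'):
        b = np.where(dist > 0, -D / dist, 0.0)
    np.fill_diagonal(b, 0.0)
    np.fill_diagonal(b, -b.sum(axis=1))
    # V^+ = J / n, with J the centering matrix (all weights equal to 1)
    V_plus = (np.eye(n) - np.ones((n, n)) / n) / n
    return V_plus @ b @ Z
</code>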

Exercises / Assignments

  • Write down some python code that computes the matrix ${\bf V}$ (given the number $n$ of items).
  • Write down some code that computes the pseudo-inverse ${\bf V}^+$.
  • Assemble all the code you have written along the previous exercises into the Borg-Groenen algorithm to compute an MDS projection starting from dissimilarities $\delta_{ij}$ (a possible assembly is sketched below).
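As a point of comparison, here is one possible way to assemble the pieces; it is only a sketch and reuses the ''raw_stress'' and ''guttman_transform'' functions sketched earlier on this page:

<code python>
import numpy as np

def mds(D, p=2, max_iter=300, epsilon=1e-6):
    """Iterative majorization (Borg-Groenen) MDS from a dissimilarity matrix D."""
    n = len(D)
    X = np.random.rand(n, p)          # random starting configuration X^[0]
    stress = raw_stress(X, D)         # sigma^[0]
    for _ in range(max_iter):
        X = guttman_transform(D, X)   # X^[k] = V^+ B(X^[k-1]) X^[k-1]
        new_stress = raw_stress(X, D)
        if stress - new_stress < epsilon:   # stopping condition
            break
        stress = new_stress
    return X
</code>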

You might want to compare your solution to these two pieces of python code. The ''MDS'' class implements the basic algorithm described here. The ''GraphMDS'' class starts from a graph and builds a dissimilarity matrix using a list of properties.

Exploring the quality of an MDS projection

Exercises / Assignments

  1. Projecting a high-dimensional space onto 2D necessarily distorts data. Aupetit and Lespinats compute a distortion index and map it onto visual cues in order to give users an idea of how much the data has been distorted. Investigate their technique (read the paper and implement it as a python script under Tulip).