Visual Analytics Course

PCA (Principal Components Analysis)

PCA is a widely known and used data analysis method, mainly because it is mathematically sound (and useful), and because the results it provides have a clear mathematical interpretation (the interpretation may be less clear when you try to translate it back into the actual application context). Take time to read the first chapter of Lebart et al.'s book; this section of the course is heavily inspired by that book/chapter.

Preamble

Let us first consider the example of the Cars dataset (from Ward et al.'s book). The data lists a series of car models, each with its main attributes: engine size, miles per gallon, etc.

Vehicle Name             Retail Price   Dealer Cost   Engine Size (l)   Cyl   HP    City MPG   Hwy MPG   Weight   Wheel Base   Len   Width
Acura 3.5 RL 4dr         43755          39014         3.5               6     225   18         24        3880     115          197   72
Audi A4 1.8T 4dr         25940          23508         1.8               4     170   22         31        3252     104          179   70
Chevrolet Cavalier 2dr   14610          13697         2.2               4     140   26         37        2617     104          183   69

There are a number of questions this dataset can help answer. Is the price of a car solely based on engine size? How much does it depend on fuel consumption (city or highway)? Do cars group into clear categories, and if so, what seem to be the main criteria?

A hypothesis we may make is that the set of attributes we have on cars should help us compare and sort them. Looking at the values of the attributes, we may judge how similar one car is to another. In more mathematical terms, we may even see this as a 'distance' between cars. Since we are talking about 'distance', how feasible is it to display cars on a screen and lay them out so that comparable cars sit close to one another?

This is the question we now tackle. Starting from a dataset like the one we have for cars, we wish to produce a map, faithful enough that we can rely on the visualization to mine the data and find answers to our questions. Now, the cars dataset is just a high-dimensional dataset: each car comes equipped with a value along 19 axes (the 19 attributes that were collected). The problem is: how do you visualize such a high-dimensional dataset?

Now, any high-dimensional dataset can be stored in tabular form, in an $n \times p$ array $X$: $n$ rows (the data items) described by $p$ columns (the attributes).

Each line then corresponds to a $p$-dimensional vector $x_i$, and the set of $n$ elements thus defines a cloud of points in $p$-dimensional space. (The same goes for columns, which define a cloud of $p$ points in $n$-dimensional space.) As far as visualization is concerned, the problem we have is to give as good a view as possible, in 2D, of this $p$-dimensional set of points. That's where PCA comes into play.

Now, when looking at how a cloud of points is structured or organized, its inertia turns out to be an important quantity. The inertia is defined as the weighted sum of squared distances over all pairs of points:

$$I = \frac{1}{2} \sum_{i} \sum_{j} p_i \, p_j \, d^2(x_i, x_j)$$

where $d(x_i, x_j)$ is the Euclidean distance between two points (that is, $d^2(x_i, x_j) = \sum_k (x_{ik} - x_{jk})^2$). Here, $p_i$ and $p_j$ refer to the respective weights of elements $i$ and $j$, satisfying $\sum_i p_i = 1$. Elements can indeed have weights (some elements may be considered more important than others), or all have equal weight $p_i = 1/n$.

Exercises / Assignments

  1. Show that computing the inertia is equivalent to computing the sum of all weighted squared distances to the center of gravity $g = \sum_i p_i x_i$ of the cloud of points (we may see $g$ as the weighted average of all points in the cloud). More precisely, show that $\frac{1}{2} \sum_i \sum_j p_i p_j d^2(x_i, x_j) = \sum_i p_i d^2(x_i, g)$. This is mainly due to the fact that the squared Euclidean distance can be computed using the scalar product: $d^2(x_i, x_j) = \langle x_i - x_j, x_i - x_j \rangle$. Simply expand each of the two expressions and observe the result. (A numerical check is sketched right below.)
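
The identity in this exercise is easy to check numerically. Below is a small, self-contained Python sketch (it assumes numpy is available; names like `pairwise_inertia` are purely illustrative) computing the inertia both ways on a random point cloud.

```python
import numpy as np

def pairwise_inertia(X, p):
    """Half the weighted sum of squared distances over all (ordered) pairs of rows."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)   # d2[i, j] = ||x_i - x_j||^2
    return 0.5 * (p[:, None] * p[None, :] * d2).sum()

def inertia_about_center(X, p):
    """Weighted sum of squared distances to the center of gravity g."""
    g = p @ X                            # g = sum_i p_i x_i
    return (p * ((X - g) ** 2).sum(axis=1)).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))             # 50 points in 4 dimensions
p = np.full(50, 1 / 50)                  # equal weights summing to 1

print(pairwise_inertia(X, p))            # the two values should agree
print(inertia_about_center(X, p))
```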

To sum up the preceding ideas: preserving the structure of the point cloud amounts to preserving distances between points, which we turn into the preservation of inertia (the weighted sum of all squared distances). The result of the exercise shows that this is equivalent to preserving the sum of squared distances to the center of gravity.

A simpler form of the problem

A first idea is to figure out which line best fits the data (seen as a cloud of points in high dimension). That is, we look for a line in $p$-dimensional space that goes through the point cloud and is a best fit (among all possible lines). What is a best fit here? It is a line that is as close as possible to all points: the one which minimizes the overall distance to all points, that is, for which the sum of squared distances between the points and the line is minimal.

Let's look at an even simpler problem.

Let's try to find a line going through the origin that satisfies our goal (the sum of squared distances between all pairs of projected points should be as large as possible). Let's call $u$ the unit vector determining the line we seek and write it as a sum of unit vectors (the usual decomposition along the axes): $u = \sum_k u_k e_k$. Now, any other vector $x_i$ can be projected onto $u$. The length of the projection of a vector $x_i$ on the line determined by $u$ can be computed as a scalar product $\langle x_i, u \rangle$. Note also that this value coincides with $\|x_i\| \cos \theta$, where $\theta$ is the angle formed by the two vectors. The column vector collecting all of these values can simply be written using matrix notation as $Xu$, where $X$ is now seen as an $n \times p$ matrix (and $u$ as a $p \times 1$ column vector).
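
To make the matrix notation concrete, here is a tiny sketch (numpy assumed) showing that the column vector of projection lengths is exactly the matrix–vector product $Xu$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))        # 5 points in 3 dimensions
u = rng.normal(size=3)
u /= np.linalg.norm(u)             # unit vector defining the line

# Projection length of each point, one dot product at a time ...
lengths = np.array([x @ u for x in X])
# ... and all at once with matrix notation.
assert np.allclose(lengths, X @ u)
```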

Now, we want to minimize the sum of squared distances from the points to the line determined by $u$, which is equivalent to maximizing the sum of all squared distances between the projections of the points on the line determined by $u$.

Why? Pythagoras! Indeed, because the distance from the origin to a point $x_i$ is fixed, minimizing the distance to the line is equivalent to maximizing the length of the projection of the point on the line. More precisely,

$$d^2(x_i, \Delta_u) = \|x_i\|^2 - \langle x_i, u \rangle^2$$

where $\Delta_u$ denotes the line determined by $u$. In other words, we want to maximize the quantity:

$$\sum_i \langle x_i, u \rangle^2$$

subject to $\|u\| = 1$ ($u$ must be a unit vector). Writing this using matrix notation, this amounts to finding a unit vector $u$ maximizing:

$$u^T X^T X u$$

(where $\cdot^T$ denotes the transpose of a matrix or vector).
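
As a sanity check of the relation above, one can compare the distance of a point to the line with the quantity $\|x\|^2 - \langle x, u \rangle^2$. A small numpy sketch, with illustrative names:

```python
import numpy as np

def distance_to_line(x, u):
    """Distance from point x to the line through the origin directed by unit vector u."""
    return np.linalg.norm(x - (x @ u) * u)

rng = np.random.default_rng(2)
x = rng.normal(size=4)
u = rng.normal(size=4)
u /= np.linalg.norm(u)

lhs = distance_to_line(x, u) ** 2
rhs = x @ x - (x @ u) ** 2         # Pythagoras: ||x||^2 = d^2 + <x, u>^2
assert np.isclose(lhs, rhs)
```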

A well known result (which we won't prove here – see Lebart's book) shows that the quantity $u^T X^T X u$, under the constraint $\|u\| = 1$ (the vector $u$ is of unit length), is maximal exactly when $u$ is an eigenvector of the matrix $X^T X$ associated with the largest eigenvalue of $X^T X$.

Now, what we have found is the best one-dimensional space on which the point cloud may be projected. That is, the line induced by $u$ is such that the overall variance of the projected points is maximal (among all possible projections we could compute).

In order to get a two-dimensional projection (which is more 'visual'), we need to compute a second best vector. It turns out that the same line of reasoning brings us to consider the eigenvector associated with the second largest eigenvalue of the matrix $X^T X$ (this second direction is orthogonal to the first).
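
The claim can be observed numerically: the leading eigenvector of $X^T X$ beats (or ties) any other unit vector for the quantity $u^T X^T X u$. A minimal sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
M = X.T @ X                              # the matrix whose eigenvectors we need

eigvals, eigvecs = np.linalg.eigh(M)     # eigh: M is symmetric, eigenvalues in ascending order
u1 = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue
u2 = eigvecs[:, -2]                      # second axis for a 2D projection

best = u1 @ M @ u1                       # equals the largest eigenvalue
for _ in range(1000):                    # no random unit vector should do better
    v = rng.normal(size=5)
    v /= np.linalg.norm(v)
    assert v @ M @ v <= best + 1e-9
```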

Exercises / Assignments

  1. Let $u$ be a column vector. Show that we indeed have $\sum_i \langle x_i, u \rangle^2 = (Xu)^T (Xu) = u^T X^T X u$. This is why we may convert the sum of all squared projection lengths into a matrix expression.
  2. Apply the above theorem by implementing it using python / Tulip. Take any graph and ignore all edges – consider the nodes as forming a point cloud, where the columns of $X$ are node properties (either properties imported with the data, or metrics computed for the nodes of the graph). Compute the linear best fit for this point cloud. (A numpy sketch of the non-Tulip part is given after this list.)
    1. The visualization could even display the actual axis computed from the eigenvector, by drawing an edge between two dummy nodes placed along the direction given by $u$ (at positions $\pm \lambda u$ for a large enough scale factor $\lambda$, for instance). Distinguish these two nodes by placing them away from the cloud and by adequately setting their size/color.
    2. (The trick we are implementing here can easily be generalized – you could just as well lay out the graph using any two metrics to position the nodes. We'll see a bit later how to handle the multi-dimensional case.)
    3. You could even consider writing a piece of code computing the distance of a point cloud to any given line, in order to validate that the line you computed is indeed the optimal one.
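
If you want to check your own implementation of these exercises, here is one possible sketch of the numpy part (the Tulip-specific code building the matrix $X$ from node properties is left out; function names such as `best_fit_direction` are purely illustrative).

```python
import numpy as np

def best_fit_direction(X):
    """Unit vector of the line through the origin minimizing the sum of
    squared distances to the rows of X (leading eigenvector of X^T X)."""
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)
    return eigvecs[:, -1]

def total_squared_distance(X, u):
    """Sum of squared distances from the rows of X to the line directed by u."""
    u = u / np.linalg.norm(u)
    residuals = X - np.outer(X @ u, u)   # points minus their projections on the line
    return (residuals ** 2).sum()

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
u_best = best_fit_direction(X)

# The computed direction should not be beaten by random candidate lines.
d_best = total_squared_distance(X, u_best)
for _ in range(100):
    v = rng.normal(size=3)
    assert d_best <= total_squared_distance(X, v) + 1e-9
```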

General case

Let us now turn to the general case: find a line, not necessarily going through the origin, that best fits the data (cloud of points).

What we face is tabular data: each line corresponds to a data item and each column is a (random) variable observed on the data sample. Now, in most cases variables will spread over different ranges and scales. Consider for instance an example where you observe how a system's temperature evolves over time: you would measure time in minutes spanning from 1am till midnight, temperature oscillating between -10°C and 20°C, and the overall system power in watts, say. All these values spread over different intervals, some negative, some positive, some possibly reaching high values and others bounded to rather low values. Because we plan to measure distances, the coordinates carrying the variables with the highest values are likely to dominate all other coordinates, minimizing the impact of the low-range variables.

One solution is to bring all variables within the same range of values, trying to have their variations globally stick to the same interval. This is accomplished using a standard and classical transformation: centering the variables around 0 and standardizing their variance to 1. More precisely, we transform the values of a variable (column $j$) by applying the change

$$x_{ij} \mapsto \frac{x_{ij} - \bar{x}_j}{s_j}$$

where $\bar{x}_j$ is the mean value of the variable (column $j$) and $s_j$ is its standard deviation.

Exercises / Assignments

  1. Denote by $x_j$ the variable corresponding to a column $j$ in $X$. Show that the transformed variable has mean value 0 and standard deviation 1 (this is because our normalization divides by the standard deviation $s_j$; we'll see this is handy later on). (A small sketch follows.)
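
Centering and standardizing is a one-liner per column. A minimal sketch (numpy assumed; note that whether the standard deviation is estimated with $n$ or $n-1$ is a convention, here $n$):

```python
import numpy as np

def standardize(X):
    """Center each column of X and scale it to unit standard deviation."""
    means = X.mean(axis=0)
    stds = X.std(axis=0)          # numpy divides by n by default (ddof=0)
    return (X - means) / stds

rng = np.random.default_rng(5)
X = rng.normal(loc=10.0, scale=3.0, size=(100, 4))
Z = standardize(X)

print(Z.mean(axis=0))             # ~0 for every column
print(Z.std(axis=0))              # 1 for every column
```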

Now, observe what happened when we went from the original variables (columns of $X$) to the centered and standardized ones: we centered the cloud about the origin, and we scaled the point cloud (applying different scale factors along the different coordinates). Finding the line that minimizes the sum of squared distances to the points can then be done using the trick we developed in the preceding paragraphs. The line can then be mapped back to the original point cloud by applying the inverse transform (scaling back to the original ranges) and translating it to where the cloud was sitting.

Exercises / Assignments

  1. Go for it. Take a dataset and compute a 2D embedding for it. Compute how much of the total inertia your embedding is able to capture, thus measuring just how faithful your representation is. You could use the Cars dataset available from the companion website of the Keim, Ward and Grinstein book.
  2. What if we use PCA to lay out a graph? You need tabular data: nodes with several metrics or imported numerical attributes, or simply tabular numerical data (census data, etc.). Compute its 2D PCA.

You might want to compare your solution to these two pieces of python code. The PCA class implements the basic algorithm described here. The GraphPCA class starts from a graph and builds a data matrix using a list of properties.
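
The linked code is not reproduced here, but a minimal PCA class along the lines described above could look like the following sketch (numpy assumed; the class and method names are illustrative and need not match the linked PCA / GraphPCA classes).

```python
import numpy as np

class SimplePCA:
    """Bare-bones PCA: standardize the columns, then project on the leading
    eigenvectors of the resulting Z^T Z matrix."""

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        self.means_ = X.mean(axis=0)
        self.stds_ = X.std(axis=0)
        Z = (X - self.means_) / self.stds_
        eigvals, eigvecs = np.linalg.eigh(Z.T @ Z)
        order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
        self.eigenvalues_ = eigvals[order]
        self.axes_ = eigvecs[:, order]
        return self

    def transform(self, X, n_components=2):
        Z = (np.asarray(X, dtype=float) - self.means_) / self.stds_
        return Z @ self.axes_[:, :n_components]

    def explained_inertia(self, n_components=2):
        """Fraction of the total inertia captured by the first components."""
        return self.eigenvalues_[:n_components].sum() / self.eigenvalues_.sum()
```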

Now remember how this whole business went: the line we compute is determined by the eigenvector associated with the largest eigenvalue of the matrix $Z^T Z$, where the matrix $Z$ is obtained from $X$ by centering and normalizing the variables (columns). Now, the sum of squared distances between the projections of the points on this line tells us just how well the line captures what's going on in the cloud. Take for instance a situation where points naturally align along a line while spreading a bit around it. The line would then be a rather good approximation of the cloud, and the distances between projected points would be close to the actual distances between points.

Now what if we take the second eigenvector, associated with the second largest eigenvalue of the matrix $Z^T Z$? We could also project points on this line and capture part of the total inertia of the point cloud.

What if we used these two eigenvectors as coordinates to embed the points in the 2D plane?
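
Using the SimplePCA sketch above (or your own implementation), the 2D embedding and the share of inertia it captures come out directly; the eigenvalue ratio used here is the usual measure of how faithful the 2D picture is.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 6))

pca = SimplePCA().fit(X)                     # SimplePCA: see the sketch above
coords = pca.transform(X, n_components=2)    # one (x, y) pair per data item
print(coords.shape)                          # (300, 2)
print(pca.explained_inertia(2))              # fraction of the inertia kept in 2D
```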

Additional exercises

Denote by $\bar{x}_j$ the mean value for column $j$ and by $\bar{x} = (\bar{x}_1, \ldots, \bar{x}_p)$ the line vector collecting the means of all variables, which can be seen as the gravity center of the cloud of points associated with all data entries $x_i$'s.

  • Let $x_j$ and $x_k$ denote two random variables (two distinct columns) with mean values $\bar{x}_j$ and $\bar{x}_k$; their covariance is defined as $c_{jk} = \frac{1}{n} \sum_i (x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)$. Show that the whole set of covariances (for all possible pairs of indices $j, k$) can be computed in matrix form as $C = X^T D X - \bar{x}^T \bar{x}$, where $X$ denotes the original array, $X^T$ denotes the transpose of the matrix $X$ and $D$ is a diagonal matrix whose diagonal entries are all equal to $1/n$. Observe that the matrix product $\bar{x}^T \bar{x}$ of two line vectors makes it possible to produce a matrix containing the necessary quantities $\bar{x}_j \bar{x}_k$. (A numerical check is sketched after this list.)
  • Variables may be centered by subtracting the mean value from each of their entries, that is, we may consider new variables $y_j$ such that $y_{ij} = x_{ij} - \bar{x}_j$. Show that this may be accomplished through the matrix computation $Y = X - \mathbf{1}^T \bar{x}$, where $\mathbf{1}$ denotes the line vector with all $n$ coordinates equal to 1.
  • Observe that the preceding computations may be extended to the case where elements are weighted, that is, $p_i$ denotes the weight of element $i$ with $\sum_i p_i = 1$. The matrix $D$ can then be replaced by the adequate diagonal matrix whose diagonal entries record the elements' weights.
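
These matrix identities are also easy to verify numerically; a short sketch with numpy (equal weights $1/n$, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 4
X = rng.normal(size=(n, p))

xbar = X.mean(axis=0, keepdims=True)     # line vector of column means (1 x p)
D = np.eye(n) / n                        # diagonal matrix with entries 1/n

# Covariance matrix in matrix form: C = X^T D X - xbar^T xbar
C = X.T @ D @ X - xbar.T @ xbar
assert np.allclose(C, np.cov(X, rowvar=False, bias=True))

# Centering in matrix form: Y = X - 1^T xbar
ones = np.ones((1, n))
Y = X - ones.T @ xbar
assert np.allclose(Y.mean(axis=0), 0.0)
```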