Visual Analytics Course

High-dimensional data and Ward & Huygens' principle

We now state a theorem that serves as a founding principle for most clustering algorithms, even for graphs. The proof of the theorem uses the same ideas and techniques as the ones we pointed at when discussing dimensionality reduction (embedding high-dimensional data in a low-dimensional Euclidean space). We had defined the inertia of a point cloud as the sum of all squared distances between pairs of points. We also saw that this is equivalent to computing the sum of squared distances to the barycenter (center of gravity) of the cloud. We now look at what happens when we consider several groups of points, each having its own local barycenter.

Ward & Huygens' principle

Let $D$ be a dataset, with elements $e_1, \ldots, e_n$. We assume elements are equipped with weights $w_1, \ldots, w_n$ such that $\sum_{i=1}^n w_i = 1$. Assume also that the set is divided into groups $C_1, \ldots, C_k$.

  • We may then define the weight of a group $C_j$ as the sum of weights of the points it contains: $w_{C_j} = \sum_{e_i \in C_j} w_i$.
  • We may define the local weight of an element $e_i \in C_j$ as its weight with respect to $C_j$'s weight. More precisely, we set $w_i^{(j)} = w_i / w_{C_j}$.
  • We may also define the barycenter of the group as $g_j = \sum_{e_i \in C_j} w_i^{(j)} e_i$.

The theorem then states that the total inertia $I(D) = \sum_{i=1}^n w_i \| e_i - g \|^2$ (where $g$ denotes the barycenter of the whole cloud) equals the weighted sum of squared distances from points to their local barycenter $g_j$, plus the weighted sum of squared distances from local barycenters $g_j$ to $g$. In equation:

$$ I(D) = \sum_{i=1}^n w_i \| e_i - g \|^2 = \sum_{j=1}^k \sum_{e_i \in C_j} w_i \| e_i - g_j \|^2 + \sum_{j=1}^k w_{C_j} \| g_j - g \|^2 $$

Proof. The proof is actually quite simple and mostly relies on the fact that distances are computed using the usual vector scalar product. We unfold the leftmost term, writing $e_i - g = (e_i - g_j) + (g_j - g)$ whenever $e_i \in C_j$:

$$ \sum_{i=1}^n w_i \| e_i - g \|^2 = \sum_{j=1}^k \sum_{e_i \in C_j} w_i \| e_i - g_j \|^2 + 2 \sum_{j=1}^k \Big\langle \sum_{e_i \in C_j} w_i (e_i - g_j), \; g_j - g \Big\rangle + \sum_{j=1}^k w_{C_j} \| g_j - g \|^2 $$

where we have used the fact that $\sum_{e_i \in C_j} w_i (e_i - g_j) = 0$, since:

$$ \sum_{e_i \in C_j} w_i e_i = w_{C_j} \, g_j $$

and

$$ \sum_{e_i \in C_j} w_i = w_{C_j}. $$

The middle (cross) term therefore vanishes and the claimed decomposition follows.
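
As a sanity check, the decomposition can be verified numerically. The short Python sketch below (a minimal illustration assuming NumPy; the variable names and the random data are ours, not part of the course material) draws a weighted point cloud, splits it into arbitrary groups, and compares both sides of the equation.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 4
points = rng.normal(size=(n, d))             # n points in dimension d
weights = rng.random(n)
weights /= weights.sum()                     # weights sum to 1
labels = rng.integers(0, k, size=n)          # arbitrary partition into k groups

g = weights @ points                         # global barycenter

total_inertia = np.sum(weights * np.sum((points - g) ** 2, axis=1))

within, between = 0.0, 0.0
for j in range(k):
    mask = labels == j
    w_Cj = weights[mask].sum()
    g_j = weights[mask] @ points[mask] / w_Cj          # local barycenter
    within += np.sum(weights[mask] * np.sum((points[mask] - g_j) ** 2, axis=1))
    between += w_Cj * np.sum((g_j - g) ** 2)

print(total_inertia, within + between)       # the two values coincide (up to rounding)
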
Exercises / Assignments

  1. Recall we had underlined (see section on Principal Components Analysis (PCA)) the fact that inertia is equivalent to computing the sum of all weighted squared distances to the center of gravity of the cloud of points. More precisely, we can show that $\sum_{i < i'} w_i w_{i'} \| e_i - e_{i'} \|^2 = \sum_{i} w_i \| e_i - g \|^2$ (using $\sum_i w_i = 1$).
  2. Implement a greedy strategy for finding a clustering using this generalized Ward framework.
  3. Ward and Huygens' principle extends to the non-Euclidean case, as shown by (Batagelj 1988). Read Batagelj's paper and implement the generalized Ward principle so as to use any dissimilarity.

A basic algorithm: $k$-means

This algorithm, due to (Hartigan & Wong 1979), is quite simple; it relies on Ward and Huygens' principle and perfectly illustrates it. The “$k$” in $k$-means corresponds to the number of classes the algorithm computes. This is tricky: we may well be unaware of the optimal number of classes. The algorithm tries to fit the data into the requested number of groups by first selecting candidates as “centers” for these groups. Data elements are then assigned to a group based on their proximity to its barycenter (center of gravity).

// first phase
denote by g_1, ..., g_k the barycenters of
groups C_1, ..., C_k
randomly select k data elements e_1, ..., e_k
and assign g_i = e_i
loop over all e \in D
  assign element e to group C_i for which d(e, g_i) is minimum

// second phase
loop until some stopping criterion is met
  (re)compute g_i = barycenter of elements in group C_i
  loop over all data elements e \in D
    assign element e to group C_i for which d(e, g_i) is minimum

When things go well, groups stabilize. As a consequence the barycenters converge to fixed positions, so the variation of their positions can be used as a stopping criterion. Degenerate cases may however require the use of an alternate stopping criterion.
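
The following Python sketch follows the pseudocode above; it is a minimal illustration assuming NumPy and equal weights, with function and variable names of our own. It stops when the barycenters no longer move by more than a small tolerance.

import numpy as np

def k_means(points, k, n_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # first phase: pick k data elements as initial barycenters
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each element to the group with the closest barycenter
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # second phase: recompute barycenters, stop when they barely move
        new_centers = np.array([points[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(k)])
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return labels, centers

# usage: labels, centers = k_means(data, k=3) for an (n, d) array `data`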

Exercises / Assignments

  1. Implement the $k$-means algorithm to experiment with the algorithm (using different datasets, changing $k$ for the same dataset – try the Cars dataset).
  2. $k$-means is unable to detect clusters with unusual shapes: this is because it uses the Euclidean distance to evaluate proximity to a class. Experiment with $k$-means using clusters of irregular shapes and report on its behavior.
  3. Design a variation of $k$-means that works on a graph using the graph distance.

Hierarchical clustering

Agglomerative / Ascending / Bottom to top aggregation
  • The clustering process is initiated by considering each element as being in its own class
  • Classes that sit closer to each other are merged first
  • The process ultimately leads to merging all elements into a single class

Variants of this scheme use different ingredients to define and compute class proximities; different merging strategies may also be defined to resolve ambiguous situations (when several classes sit at the same distance, for instance).

When dealing with high-dimensional Euclidean data (points $e \in \mathbb{R}^d$), proximities between classes (or clusters) are usually computed using one of the following three approaches (we borrow these definitions from the seminal paper by (Guha et al. 1998); a small sketch computing them follows the list). Denote by $g_i$ the barycenter of class $C_i$.

  • $d_{mean}(C_i, C_j) = \| g_i - g_j \|$ simply is the Euclidean distance between the barycenters of classes $C_i$ and $C_j$
  • $d_{min}(C_i, C_j) = \min_{e \in C_i, f \in C_j} \| e - f \|$ is the smallest distance between a point of $C_i$ and a point of $C_j$ (single linkage)
  • $d_{max}(C_i, C_j) = \max_{e \in C_i, f \in C_j} \| e - f \|$ is the largest distance between a point of $C_i$ and a point of $C_j$ (complete linkage)
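
A minimal NumPy sketch of these three proximities (each cluster represented as an (n, d) array; function names are ours):

import numpy as np

def d_mean(A, B):
    # distance between the barycenters of the two clusters
    return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))

def pairwise_distances(A, B):
    # all distances between a point of A and a point of B
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

def d_min(A, B):
    return pairwise_distances(A, B).min()

def d_max(A, B):
    return pairwise_distances(A, B).max()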

The CURE algorithm by (Guha et al. 1998) attempts to overcome a problem experienced with $k$-means, that of being unable to detect clusters with irregular shapes. Roughly speaking, this is accomplished by using several sample points to describe each cluster (instead of solely using the barycenter). Proximities between clusters are then measured using the two closest points among all available samples. Shrinking the sample points towards the barycenter moreover reduces the effect of outliers and stabilizes the overall algorithm behavior. The overall algorithm thus looks like this (borrowed from the Wikipedia description of CURE):

Input : a set S of points e, f, ...
Output : k clusters

initialize clusters C_1, C_2, ... as formed by singletons

for each cluster C_i, compute its barycenter g_i,
   and select a set of c representative points
# initially c = 1 since each cluster has one data point
# c is a parameter of the algorithm
denote by C.closest the cluster C' closest to C,
   and store the distance d(C, C.closest)
denote by C.representatives the set of representative points in C
  
arrange clusters in a heap, placing the cluster
   with the smallest distance d(C, C.closest) at the top of the heap
representative points of clusters are stored in a k-d tree (a spatial binary search tree)
to speed up the search for the closest cluster after a merge

repeat until the number of classes reaches some target threshold (usually a number k of clusters)
  remove the top element C in the heap and merge it with its closest cluster C.closest
  compute the new representative points for the merged cluster C'
  remove C and C.closest from the heap
  update D.closest and relocate D, for all the clusters D 
  insert C' into the heap

Now, the distance between two clusters is computed by choosing the closest pair of points among all pairs formed with representative points. The selection of representative points is performed by iteratively selecting a point that is farthest from all previously selected representative points. After being selected, representative points are shrunk towards the barycenter by a factor $\alpha \in [0, 1]$, that is, a representative point $p$ is stored as $p + \alpha (g - p)$ where $g$ is the barycenter of the cluster (note however that these new points are solely used to compute distances between clusters, so the original data is not modified).
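
A possible NumPy sketch of this representative selection and shrinking step (a simplification of CURE's actual procedure; `c` plays the role of the number of representatives and `alpha` of the shrinking factor):

import numpy as np

def representatives(cluster, c, alpha):
    # cluster: (n, d) array of points; returns up to c shrunk representative points
    g = cluster.mean(axis=0)                      # barycenter of the cluster
    # start from the point farthest from the barycenter
    reps = [cluster[np.argmax(np.linalg.norm(cluster - g, axis=1))]]
    while len(reps) < min(c, len(cluster)):
        # distance of every point to its closest already-selected representative
        dists = np.linalg.norm(cluster[:, None, :] - np.array(reps)[None, :, :], axis=2).min(axis=1)
        reps.append(cluster[np.argmax(dists)])    # pick the farthest such point
    reps = np.array(reps)
    return reps + alpha * (g - reps)              # shrink towards the barycenter

def cluster_distance(reps_a, reps_b):
    # minimum distance over all pairs of representative points
    return np.min(np.linalg.norm(reps_a[:, None, :] - reps_b[None, :, :], axis=2))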

CURE uses relevant data structures in order to minimize the time spent searching for closest elements. Because CURE originally processes points in Euclidean space, a $k$-d tree is used to store cluster points as well as representative points (since they are used when computing distances between clusters). However, the property we wish to emphasize here is the ability of CURE to identify clusters of varying shapes, which is accomplished using representative points together with the minimum distance between them.
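
As a small illustration of these two data structures (a sketch only, assuming SciPy's `cKDTree` and Python's `heapq`; the toy data and values are ours and this is not CURE's full bookkeeping):

import heapq
import numpy as np
from scipy.spatial import cKDTree

# toy data: representative points of all clusters, with the id of their owning cluster
reps = np.random.default_rng(1).random((100, 2))
owner = np.repeat(np.arange(10), 10)

tree = cKDTree(reps)                        # spatial index over all representative points
dist, idx = tree.query(reps[0], k=3)        # the 3 representatives closest to reps[0]
print(owner[idx])                           # clusters these neighbours belong to

# heap of (distance to closest cluster, cluster id): the best merge candidate sits on top
heap = [(0.7, 3), (0.2, 1), (0.4, 0)]
heapq.heapify(heap)
best_distance, best_cluster = heapq.heappop(heap)   # -> (0.2, 1)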

Exercises / Assignments

  1. Implement and experiment with the CURE algorithm, using various distance metrics (Euclidean, Manhattan, Minkowski, etc.).
  2. Design an extension of the CURE algorithm for graph using the graph distance.
  3. Implement a variant using Ward's principle, each time merging the pair of clusters that induces the smallest increase in within-cluster inertia
    1. Use Tulip to run this algorithm, and plot the inertia curve as the process evolves to help choose a “best” clustering

Divisive / Descending / Top to bottom division
  • The clustering process is initiated by considering a single class containing all elements
  • Each class is then divided into sub-classes using a “best” cut
  • The process ultimately ends when classes are singletons

Variants of this scheme use different ingredients to define and compute a “best” cut; different division strategies may also be defined to resolve ambiguous situations (when several “best” cuts are possible, for instance).
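
As an illustration of the divisive scheme, one common choice of “best” cut is to bisect a class with a 2-means step and recurse on the class with the largest inertia. A minimal sketch, assuming scikit-learn and with function names of our own (it stops at a target number of classes rather than at singletons):

import numpy as np
from sklearn.cluster import KMeans

def divisive(points, n_classes):
    # start from a single class containing all elements (stored as index arrays)
    classes = [np.arange(len(points))]
    while len(classes) < n_classes:
        # pick the class with the largest inertia (sum of squared distances to its barycenter)
        inertias = [np.sum((points[c] - points[c].mean(axis=0)) ** 2) for c in classes]
        target = classes.pop(int(np.argmax(inertias)))
        # "best" cut: split it in two with a 2-means step
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points[target])
        classes.append(target[labels == 0])
        classes.append(target[labels == 1])
    return classes

# usage: divisive(data, n_classes=4) returns index arrays, one per class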
