Visual Analytics Course

Data

Data processing occurs ahead of visualization but nevertheless remains an important aspect of the visualization process. Many different types of data have to be dealt with when building a visualization system: numerical data (reals or integers), text data (labels, but also categories denoting ordinals, that is ordered, non-numerical data), etc. We shall however focus on numerical data, at least for now.

We will have a close look at various ways to build histograms or other types of curves. These curves may be useful to find models for the data at hand.

Exercises / Assignments

  1. Go to the Ward et al. book website and grab the datasets that will be of interest during this course; make sure you are familiar with their content: Iris plants, Cars, Health related (UNICEF).
  2. Browse the web and find network data of varying size, from small to large and huge. Write down the URLs so you are able to report your sources and get back to the websites from which the data was borrowed.
  3. Analyse the datasets you have collected in terms of structure and meaning (try to understand why people would collect such data, and what questions they hope to answer by analyzing it).
  4. (Borrowed from Ward et al.) A common task when dealing with data is dividing it into categories (clusters), such as low, medium, high. We'll have the opportunity to review a set of techniques to perform classification/clustering. For now, design an algorithm that divides data into a set of bins, using one (or more) of the following strategies (a sketch is given after this exercise list). Discuss the pros and cons of each strategy, illustrating them on a given dataset:
    • uniform bin width - the size of the bin (range of values) is the same for all bins;
    • uniform bin count (about the same number of elements in each bin);
    • best break points: start with all elements in a single bin, search for the largest gaps and divide at those locations. If no gaps exist, break at values with a low number of occurrences.
  5. (Borrowed from Ward et al.) Normalization is a process in which one or more dimensions are brought to the same range of values. This allows for easier comparison of dimensions. Design algorithms that perform normalization using one (or more) of the following normalization strategies:
    • bring all values to the [0, 1] interval;
    • values are mapped so that the resulting set has mean 0 and variance 1 (hence standard deviation 1);
    • all values are integers between 0 and 255.
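
The following is a minimal sketch of the three binning strategies of exercise 4, in plain Python. The function names (uniform_width_bins, uniform_count_bins, gap_based_bins), the convention of returning break points rather than the bins themselves, and the sample values are illustration choices, not part of the exercise statement.

```python
# Sketch of the three binning strategies from exercise 4.
# Each function returns the list of break points (bin boundaries).

def uniform_width_bins(data, k):
    """Uniform bin width: k bins, all spanning the same range of values."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / k
    return [lo + i * width for i in range(k + 1)]

def uniform_count_bins(data, k):
    """Uniform bin count: about the same number of elements in each bin."""
    values = sorted(data)
    n = len(values)
    # Break after every n/k elements (the last bin absorbs the remainder).
    return [values[0]] + [values[(i * n) // k] for i in range(1, k)] + [values[-1]]

def gap_based_bins(data, k):
    """Best break points: split at the k-1 largest gaps between sorted values."""
    values = sorted(data)
    gaps = [(values[i + 1] - values[i], i) for i in range(len(values) - 1)]
    cuts = sorted(i for _, i in sorted(gaps, reverse=True)[: k - 1])
    # Place each break point in the middle of the chosen gap.
    return [values[0]] + [(values[i] + values[i + 1]) / 2 for i in cuts] + [values[-1]]

if __name__ == "__main__":
    sample = [1.0, 1.2, 1.3, 4.8, 5.1, 5.3, 9.7, 10.0]
    print(uniform_width_bins(sample, 3))
    print(uniform_count_bins(sample, 3))
    print(gap_based_bins(sample, 3))
```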

One good way to look at data is to format all available data into a table (this is what most data visualization software does anyway, at least through its API). Each line in the table then corresponds to a data item e_i. Each column corresponds to a variable observed on the dataset. Some of these variables may be considered independent variables, others will more conveniently be considered dependent. Other variables will be added along the way.

Data items | X_1  | X_2  | ... | X_k
-----------+------+------+-----+-----
e_1        | x_11 | x_12 | ... | x_1k
e_2        | x_21 | x_22 | ... | x_2k
...        | ...  | ...  | ... | ...
e_N        | x_N1 | x_N2 | ... | x_Nk

Note how indices are assigned as with matrices. In some cases, the data table (except for its first column) will indeed be considered as a matrix.

Also, each column may be considered as a random variable (observed on the data sample).
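
As an illustration of this convention, here is a minimal sketch representing the data table as an N x k array (rows are the items e_i, columns the variables X_j). It assumes numpy is available; the values are made up and only serve to show how a column can be read off as a sample of a variable.

```python
# Minimal sketch: the data table as an N x k array.
# Rows are the items e_i, columns the variables X_j; values are made up.
import numpy as np

items = ["e_1", "e_2", "e_3"]                # first column: item identifiers
table = np.array([[5.1, 3.5, 1.4],           # x_11  x_12  x_13
                  [4.9, 3.0, 1.4],           # x_21  x_22  x_23
                  [6.3, 3.3, 6.0]])          # x_31  x_32  x_33

X_1 = table[:, 0]                            # the column X_1, seen as an observed sample
print(X_1.mean(), X_1.std())                 # summary statistics of that column
```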

Histograms

Histograms are useful to get an idea of how values (observed during an experiment, say) distribute over a domain. They help answer questions such as “What is the most probable value (or values) observed during the experiment?”, “What is the mean of all observed values?”, “Do values below and above the mean occur with equal probability?”, etc.

The simplest form of a histogram is obtained by cutting the range of values into bins of equal size h and then counting how many elements fall within each bin. Hence the histogram may be seen as a (piecewise constant, hence discontinuous) function f whose argument ranges over the original range of the data, defined as:

f(x) = number of elements falling in the same bin as x.   (Eq. 1)

Thus, the function f gives an idea of how frequently the value x (or values close to x) might be observed. This definition has an obvious defect: it may well put x into a bin containing only a few elements, thus assigning f(x) a low value (meaning that x is not observed frequently), although x may be close to the next bin, which happens to gather many more elements (meaning that values close to x have a much greater chance of being observed).
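
As a quick illustration, here is a minimal sketch of the bin-count definition (Eq. 1) in plain Python; the sample values and the number of bins are made up for the example.

```python
# Minimal sketch of the equal-width bin counts behind (Eq. 1).
# The sample and the number of bins are arbitrary illustration values.

def bin_counts(data, nb_bins):
    """Return the count of elements falling in each of nb_bins equal-width bins."""
    lo, hi = min(data), max(data)
    h = (hi - lo) / nb_bins                    # common bin width
    counts = [0] * nb_bins
    for x in data:
        # Index of the bin containing x (the maximum value goes to the last bin).
        i = min(int((x - lo) / h), nb_bins - 1)
        counts[i] += 1
    return counts

sample = [2.1, 2.3, 2.4, 3.9, 4.0, 4.1, 4.2, 7.8, 8.0]
print(bin_counts(sample, 3))                   # [5, 2, 2]
```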

Hence, we might want to assign x a value computed from its neighborhood instead of from fixed bins. We may thus redefine the function as:

f_h(x) = number of elements falling within the interval [x - h/2, x + h/2]   (Eq. 2)

where h is a fixed width defining a local neighborhood for x – as opposed to a fixed bin.

Although this new function improves over the previous one, it is still unsatisfactory, as the resulting function may well be quite noisy (erratic). What we want to do is smooth out the resulting curve. One way to do this is to count elements close to x while taking their distance to x into account. That is, we may assign weights and decide that elements closer to x have a higher weight. There are a number of ways to do that.

Let K be a function of a real parameter u (roughly denoting the distance to x), decreasing as |u| increases. Set K(u) = 0 for |u| >= 1, set K(0) = 1, and let K decrease linearly from K(0) = 1 to K(1) = 0; that is, K(u) = 1 - |u| for |u| <= 1 and K(u) = 0 otherwise. Assume the data sample comprises N observed values x_1, ..., x_N. Now define f_h as:

f_h(x) = (1 / (N h)) * sum_{i=1}^{N} K((x - x_i) / h)   (Eq. 3)

(The function f_h has been indexed by h to emphasize the fact that it does depend on the choice of this parameter. The parameter h is usually chosen smaller as the size of the data sample grows. Note that this parameter also enters the kernel function through its argument (x - x_i) / h.) Obviously, by definition of K, only elements sitting at a distance at most h from x contribute to the sum. This last kernel is called the triangular kernel function (why? can you guess?).

A required condition on the kernel function is that it defines a probability density function, namely that its integral over the reals equals 1. This is indeed the case for the triangular kernel function. Another popular kernel is the Gaussian kernel function, defined by K(u) = (1 / sqrt(2 pi)) * exp(-u^2 / 2).

Gaussian kernels are widely used in computer graphics (2D Gaussian kernels are used to blur images, for instance). Note that with the Gaussian kernel the whole data sample needs to be traversed when evaluating (Eq. 3), since the kernel never vanishes, unless we use a truncated, discretized version of the Gaussian kernel, as is often the case.
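
Below is a minimal sketch of (Eq. 3) in plain Python, with the triangular and Gaussian kernels written out explicitly; the sample values and the bandwidth h are made-up illustration choices.

```python
# Minimal sketch of the kernel estimate of (Eq. 3).
# The sample and the bandwidth h are arbitrary illustration values.
import math

def triangular(u):
    """Triangular kernel: K(u) = 1 - |u| for |u| <= 1, 0 otherwise (integrates to 1)."""
    return max(0.0, 1.0 - abs(u))

def gaussian(u):
    """Gaussian kernel: K(u) = exp(-u^2 / 2) / sqrt(2 pi) (integrates to 1)."""
    return math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)

def f_h(x, data, h, kernel):
    """Evaluate f_h(x) = (1 / (N h)) * sum_i K((x - x_i) / h)."""
    n = len(data)
    return sum(kernel((x - xi) / h) for xi in data) / (n * h)

sample = [2.1, 2.3, 2.4, 3.9, 4.0, 4.1, 4.2, 7.8, 8.0]
for x in (2.0, 4.0, 6.0):
    print(x, f_h(x, sample, 1.0, triangular), f_h(x, sample, 1.0, gaussian))
```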

Wikipedia lists a number of interesting variants.

Exercises / Assignments

  1. Think of a kernel function K that gives back, up to a normalizing constant, the function defined in (Eq. 2).
  2. Browse the web and find applets or available code to play with kernel functions and data smoothing techniques. Vary the parameter h to understand and measure its effect on smoothing. A rule of thumb is that the choice of the kernel function is not as important as the choice of the parameter h. Can you come up with a data sample that clearly exhibits this rule?
  3. Think of possible extensions of data smoothing and kernel functions to 2D data. Again, look for code and interactive apps available on the web. Provide URLs and link your answers to your sources of information.

Normalization

Normalization aims at bringing data into comparable intervals of values. The amount of money people spend on housing and their education level (in number of years) spread over two different numerical scales. Prices of cars and fuel consumption also vary over completely different scales. Comparing these values to see whether there is some correlation requires that we bring them to a similar scale.

There are a number of ways normalization can be accomplished. Values spreading over an interval [m, M] may be brought down to [0, 1] linearly using the formula x' = (x - m) / (M - m). The comparison then relies on the fact that all values spread over the same interval. That is, considering a column X_j as a random variable, we compute a new variable by applying a linear (affine) transform to it. This does not however take the distribution of values into account: although values sit in the same interval, their mean values might well differ (and most importantly their standard deviations).
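
A minimal sketch of this linear rescaling in plain Python, also covering the “integers between 0 and 255” variant of exercise 5; the function names and sample values are illustration choices, and the sketch assumes the values are not all equal (so that M > m).

```python
# Minimal sketch of min-max normalization: x' = (x - m) / (M - m).
# Illustration values only; assumes M > m (values not all equal).

def to_unit_interval(data):
    """Map values linearly onto [0, 1]."""
    m, M = min(data), max(data)
    return [(x - m) / (M - m) for x in data]

def to_byte_range(data):
    """Map values to integers between 0 and 255."""
    return [round(255 * x) for x in to_unit_interval(data)]

prices = [12000, 15500, 9800, 31000]
print(to_unit_interval(prices))
print(to_byte_range(prices))
```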

Another way to go with normalization is to make sure the mean value sits at the origin while values all spread more or less the same way around this mean value: in other words, bring the mean value to 0 and normalize the variance of the data sample to 1. This is accomplished the following way. Let x_1, ..., x_N be the data sample (real numbers or integers):

  • The mean is equal to mu = (1/N) * sum_{i=1}^{N} x_i.
  • The variance is the mean square distance to the mean: sigma^2 = (1/N) * sum_{i=1}^{N} (x_i - mu)^2.
  • The standard deviation is sigma, the square root of the variance.

The normalized data sample is then obtained by computing x_i' = (x_i - mu) / sigma, which can be checked to have mean 0 and variance 1. What we did is compute a new variable X' = (X - mu) / sigma from the original random variable X.
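
A minimal sketch of this centering-and-scaling step in plain Python; the function name standardize and the sample values are illustration choices, and the sketch assumes the sample is not constant (sigma > 0).

```python
# Minimal sketch of centering and scaling to mean 0 and variance 1.
# Illustration values only; assumes sigma > 0 (sample not constant).
import math

def standardize(data):
    """Return (x_i - mu) / sigma for every element of the sample."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    return [(x - mu) / sigma for x in data]

sample = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]
z = standardize(sample)
print(sum(z) / len(z))                         # mean of z: approximately 0
print(sum(v * v for v in z) / len(z))          # variance of z: approximately 1
```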

Exercises / Assignments

  1. Take any two variables on a set of points. Use these two variables to embed the dataset in 2D (using each variable as a Cartesian coordinate). Apply a normalization to the dataset and observe its effect on the embedding. (You could write simple Python code and visualize this using Tulip.)
  2. Let X be a random variable and set Y = aX + b with a > 0. Now, write mu_X, sigma_X for the mean and standard deviation of X, and mu_Y, sigma_Y for those of Y. Then the variables (X - mu_X) / sigma_X and (Y - mu_Y) / sigma_Y coincide. That is, no matter what linear (affine) transform you apply to a variable, it has a unique centered normalized form.
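
A sketch of the verification for exercise 2, assuming a > 0 (for a < 0 the normalized form flips sign):

```latex
% Assuming Y = aX + b with a > 0:
\mu_Y = a\,\mu_X + b, \qquad \sigma_Y = a\,\sigma_X,
\qquad\text{hence}\qquad
\frac{Y - \mu_Y}{\sigma_Y}
  = \frac{(aX + b) - (a\,\mu_X + b)}{a\,\sigma_X}
  = \frac{X - \mu_X}{\sigma_X}.
```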
