Visual Analytics Course

Network data

You can have a look at the slides I'll be using for this part of the course. Some cover trees and hierarchical graphs, others discuss planarity and spring embedders (force-directed layouts).

Networks are everywhere. Some networks exist because they are explicitly built: think of a power network, where nodes may be power plants or dispatch points and edges are cables hanging between poles or hidden underground; think of computer networks (the internet), where nodes correspond to routers and edges correspond to actual physical connections between routers (classical cables, fiber cables, radio frequency connections, etc.). Networks also appear as abstract constructions: they are useful to model situations where, most of the time, links correspond to interactions between entities modeled as nodes. Think of social networks; you are not physically linked to your friends, but links correspond to email exchanges, chat sessions or instant messaging. You could also run a (written) survey in an organization and define a network based on whom people exchange with at work.

Browse the web to find animations and/or applets showing graph layouts in action. See the JUNG package for example.

Graph drawing is covered in excellent books. Have a look at some of them:

  • Handbook of Graph Drawing and Visualization. Roberto Tamassia, Editor, CRC Press (2013).
  • Kaufmann, M. and D. Wagner, Eds. (2001). Drawing Graphs, Methods and Models. Lecture Notes in Computer Science, Springer.
  • Di Battista, G., P. Eades, et al. (1998). Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall.

Go have a look at the 'zoo', as Jeffrey Heer calls it (Heer is the creator of prefuse and Vizster). The zoo contains lots of images of networks and discusses some of the layout algorithms.

Trees

Trees are familiar objects in computer science (data structures, decision trees, etc.), but also in numerous other areas. Think of trees used to depict organization charts of companies, taxonomies of animal species, etc. They are also among the simplest objects to draw.

Tree layout algorithms achieve the lowest possible time complexity, O(N), where N is the number of nodes (note that a tree on N nodes has N - 1 edges, so this is also linear in the number of edges).

Walker's algorithm is a classical example. For convenience, the algorithm requires each node to point to its parent node. It also requires every node to point to its right and left siblings.

Being able to deal with node sizes requires additional improvements to the algorithm. See the paper by Stein and Benteler for more details.
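Walker's algorithm itself takes more code than fits here, but the following sketch (plain Python, assuming the tree is given as a dict mapping each node to the list of its children; all names are illustrative) shows the basic top-down principle: a node's y coordinate is its depth, its x coordinate is the average of its children's x coordinates, and leaves are placed left to right.

# Minimal layered tree layout sketch (not Walker's full algorithm)
def layered_tree_layout(children, root):
    """children: dict mapping each node to the list of its children."""
    positions = {}          # node -> (x, y)
    next_leaf_x = [0.0]     # next free horizontal slot for a leaf

    def place(node, depth):
        kids = children.get(node, [])
        if not kids:                      # a leaf takes the next free slot
            x = next_leaf_x[0]
            next_leaf_x[0] += 1.0
        else:                             # an inner node is centered above its children
            xs = [place(kid, depth + 1) for kid in kids]
            x = sum(xs) / len(xs)
        positions[node] = (x, float(depth))
        return x

    place(root, 0)
    return positions

tree = {"r": ["a", "b"], "a": ["c", "d"], "b": ["e"]}
print(layered_tree_layout(tree, "r"))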

Exercises / Projects

  • Implementing Walker's tree drawing layout. Walker's paper is one of a series of papers on classical top-down, hierarchical drawing of trees, where nodes sit at a distance from the root proportional to their level in the tree. The algorithm runs in linear time and guarantees to use minimum width (by traversing the tree twice, once from the leaves up to the root and once back down from the root to the leaves).
  • Implementing the radial layout for trees. See Eades 1992 paper.
  • Implementing Eades' H-tree layout for binary trees.
  • Implement the nested boxes (onion graphs) algorithm for trees by Sindre et al. 1993.

Hierarchical graphs and planar graphs

Other classes of graphs have received special attention from the Graph Drawing community.

Hierarchical graphs, also known as directed acyclic graphs (DAGs), do not contain cycles, which makes it possible to rank their nodes from top to bottom. The absence of cycles indeed makes it possible to define the rank of any node, where nodes with no predecessors have rank 0 (why are there necessarily nodes with no predecessors?). Nodes are usually placed on layers according to their rank. The problem then boils down to ordering nodes within layers so as to obtain a minimum number of edge crossings. Note however that the underlying optimization problem is NP-complete, even when there are only two layers.
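As an illustration, here is a short sketch (plain Python, assuming the DAG is given as a dict of successor lists; names are illustrative) computing the rank of each node as the length of the longest path reaching it from a source:

from collections import deque

def layer_ranks(succ):
    """succ: dict mapping each node of a DAG to the list of its successors."""
    nodes = set(succ) | {v for vs in succ.values() for v in vs}
    indeg = {u: 0 for u in nodes}
    for u in succ:
        for v in succ[u]:
            indeg[v] += 1
    # Sources (no predecessors) get rank 0; such nodes must exist in a DAG,
    # otherwise following predecessors forever would eventually close a cycle.
    rank = {u: 0 for u in nodes}
    queue = deque(u for u in nodes if indeg[u] == 0)
    while queue:                                      # topological traversal
        u = queue.popleft()
        for v in succ.get(u, []):
            rank[v] = max(rank[v], rank[u] + 1)       # longest-path layering
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return rank

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(layer_ranks(dag))   # ranks: a -> 0, b -> 1, c -> 1, d -> 2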

These optimization problems can become quite tricky and involve complexity theory. An excellent introduction to the crossing minimization problem is Chapter 5 of the Handbook of Graph Drawing.

Planar graphs are graphs that can be drawn without any edge crossings. Testing whether a graph is planar can be done in linear time. Planarity algorithms are however quite complex and often rely on sophisticated data structures. Indeed, planarity relates to deep topological properties of a graph.
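In practice one rarely implements a planarity test from scratch; for instance the networkx library (assuming it is available in your Python environment) provides a linear-time test:

import networkx as nx

G = nx.complete_graph(4)            # K4 is planar
is_planar, embedding = nx.check_planarity(G)
print(is_planar)                    # True

K5 = nx.complete_graph(5)           # K5 is the smallest non-planar graph
print(nx.check_planarity(K5)[0])    # False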

More information on these two classes of graphs may be found in the Graph Drawing book.

General graphs

What are the options when a graph has none of the above properties? How can we draw such a graph? There are several options.

Spanning tree

One popular solution to display a graph is to rely on the extraction of a skeleton structure. A spanning tree is such a skeleton, which moreover may be laid out using any tree layout algorithm.
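For instance, a breadth-first traversal yields a spanning tree that can then be handed to any of the tree layout algorithms above. A minimal sketch, assuming the graph is stored as a dict of adjacency lists (all names are illustrative):

from collections import deque

def bfs_spanning_tree(adj, root):
    """adj: dict mapping each node to its neighbors; returns a dict of child lists."""
    tree = {u: [] for u in adj}
    visited = {root}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in visited:      # first time we reach v: keep the edge (u, v)
                visited.add(v)
                tree[u].append(v)
                queue.append(v)
    return tree                       # the remaining edges are simply not drawn as tree edges

graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(bfs_spanning_tree(graph, "a"))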

Exercises / Assignments

  1. Implement such a solution using Tulip / python.

Spring embedding / Force-directed layouts

Using a spanning tree however may introduce a bias in the graphical representation, letting a user think that the tree structure dominates the overall structure of the graph.

Another popular approach was introduced by Peter Eades in his 1984 paper:

Eades, P. (1984). "A Heuristic for Graph Drawing." Congressus Numerantium 42: 149-160.

and was later followed by others, improving different aspects of Eades' original idea. The (Fruchterman & Reingold 1991) variation is still widely used and implemented, just as GEM (Frick et al. 1994). See also Stephen Kobourov's chapter (part of the Graph Drawing Handbook).

The general idea is the following. Think of a graph as made of charged metal marbles; because the marbles are charged, they tend to repel each other. Now, whenever two nodes are linked by an edge, we think of the edge as a metal spring holding the marbles together. Hence, whatever forces act on the marbles, the spring makes sure they don't get too far away from each other.

All force-directed algorithms rely on a similar metaphor. The core algorithm of any of these layouts boils down to simulating the system, letting forces act on nodes and edges. In many cases, because the induced (physical) system approaches a stable state, the algorithm will produce a “pleasing” layout of the graph.

The fact that the algorithm more or less runs a simulation of the underlying forces allows one to animate it. The algorithm can also react to user interaction (moving nodes around). Have a look at the Visual Thesaurus that uses force-directed layout to display links between dictionary entries.

The simulation roughly proceeds along the following lines:

Let p_u denote the position (in 2D or 3D space) of node u
Let f_u denote a vector acting on p_u (which can be seen as a vector just as well)

randomly initialize positions p_u

repeat several times (often taken as O(|V|))
  for all u in V
    set f_u = 0
    for v in V, v != u
      f_u += f_r(u,v) where f_r(u,v) is a force repulsing u from v
    for v in N(u)
      f_u += f_a(u,v) where f_a(u,v) is a force attracting u to v

  for all u in V  
    move u by a fraction of f_u

Now, because the internal loop goes over all pairs of nodes u, v, it runs in time O(N^2). Since the loop is repeated O(N) times, the overall time complexity of the algorithm is in O(N^3) which is rather high.
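The pseudocode above translates almost line for line into plain Python. The sketch below uses the attractive and repulsive forces of Eades' original heuristic (logarithmic springs between adjacent nodes, inverse-square repulsion between the others); the constants c1, c2, c3 and the step size are arbitrary choices, not values prescribed by the paper:

import math, random

def spring_layout(adj, iterations=None, c1=2.0, c2=1.0, c3=1.0, step=0.1):
    """adj: dict mapping each node to the set of its neighbors; returns node -> (x, y)."""
    nodes = list(adj)
    pos = {u: (random.random(), random.random()) for u in nodes}   # random initial positions
    iterations = iterations or len(nodes)
    for _ in range(iterations):
        forces = {u: [0.0, 0.0] for u in nodes}
        for u in nodes:
            for v in nodes:
                if u == v:
                    continue
                dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                d = math.hypot(dx, dy) or 1e-9
                if v in adj[u]:
                    f = c1 * math.log(d / c2)     # spring: pulls adjacent nodes together
                else:
                    f = -c3 / (d * d)             # charged marbles: pushes the others away
                forces[u][0] += f * dx / d
                forces[u][1] += f * dy / d
        for u in nodes:                           # move each node by a fraction of its force
            pos[u] = (pos[u][0] + step * forces[u][0], pos[u][1] + step * forces[u][1])
    return pos

graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(spring_layout(graph))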

Exercises / Assignments

  1. (Frishman and Tal 2008) designed an improved version of the spring embedding scheme to lay out a dynamic graph. The graph is seen as a series of graphs G_0, G_1, …, G_k, where G_{i+1} is obtained from G_i by adding/deleting some nodes/edges. Read the paper, and implement their algorithm within Tulip/python. The trick is to design and compute pinning weights for nodes in order to assign them consensus positions throughout the graph sequence, so as to obtain better readability and keep a stable mental map for the user.

Network metrics

Network data differs from high-dimensional data in that it is equipped with a topology – a structure built from the set of links between nodes. Being able to infer properties from the topology alone is a wonderful game; graph theory is largely about it. Network metrics offer a useful angle to identify key nodes in the network. Typically, you'll want to identify nodes that gather a maximum of links, those that hold a central position (we'll see what central may mean), etc.

  • Local metrics: node degree, clustering coefficient, Burt's node constraint, Shannon entropy, Burt's hierarchy index, Guimera's participation coefficient
  • Distance-based metrics
    • Eccentricity: Harary status, graph eccentricity, integration, radiality, centroid value
    • Closeness centralities: Beauchamp closeness centrality, Dangalchev closeness centrality, graph centrality, current-flow closeness centrality, reach centrality, information centrality
    • Betweenness centralities: stress centrality, betweenness centrality, bridging centrality, differential betweenness, current-flow betweenness centrality
  • Iterative metrics
    • Random centralities: random walk centrality, random walk betweenness centrality, random walk closeness centrality
    • Feedback centralities: Katz status score, eigenvector centrality, Bonacich's bargaining centrality, Hubbell status score, PageRank, HITS, SALSA, spreading activation, Strahler numbers

The above listing organizes different graph (node) metrics into a taxonomy. Roughly speaking, metrics are distinguished according to their time complexity.

  • Local metrics only involve looking at the neighbors of a node, with node degree as the archetype
  • Distance-based metrics all require computing distances between nodes in the graph, which leads to a higher time complexity
    • Closeness metrics measure how 'close' a node globally is to all other nodes in the graph, each time using distances as their core ingredient
    • Betweenness centralities aim at measuring how 'central' nodes are, and involve computing shortest paths between nodes
  • Iterative metrics require traversing the graph along any/all paths – they may be distinguished from betweenness centralities because they can all be implemented using matrix calculus
    • Random centralities rely on random walks (traversing the graph randomly along paths)
    • Feedback centralities incrementally compute values as the graph is traversed

Local metrics

Node Degree.

This is an archetypal metric – maybe one of the oldest. Its role is vital, not only because it captures a basic and fundamental measure on a graph, but also because it is often easily interpretable by users. The standard node degree (number of neighbors) may be generalized to the case where edges carry weights (positive real numbers), that is, a function w assigning a weight w(u,v) > 0 to each edge (u,v) \in E. We then consider the weighted degree of a node, defined as d_w(u) = \sum_{v \in N(u)} w(u,v).
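A minimal sketch, assuming the weighted graph is stored as a dict of dicts mapping a node to its neighbors and the corresponding edge weights (names are illustrative):

def weighted_degree(wadj, u):
    """wadj: dict of dicts, wadj[u][v] = w(u, v); returns the sum of incident weights."""
    return sum(wadj[u].values())

wgraph = {"a": {"b": 0.5, "c": 2.0}, "b": {"a": 0.5}, "c": {"a": 2.0}}
print(weighted_degree(wgraph, "a"))   # 2.5 ; with unit weights this is the usual degree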

Clustering coefficient. This measure was introduced by Watts and Strogatz in a seminal paper.

  • Watts, D. J. and S. H. Strogatz (1998). “Collective dynamics of “small-world” networks.” Nature 393: 440-442.

The clustering coefficient measures just how much a node sits in a tightly connected neighborhood. Looking at all neighbors v \in N(u) of a node u, it compares the number of links between those neighbors to the number of links in the clique (complete graph) over the set of neighbors. That is, c(u) = |E(N(u))| / (k(k-1)/2), where E(X) denotes the set of edges between nodes in X, k = |N(u)|, and k(k-1)/2 is the number of edges in a complete graph with k nodes.
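A sketch of the computation on an adjacency-set representation (node labels are assumed comparable so each pair of neighbors is counted once; nodes of degree 0 or 1 get coefficient 0, a common convention):

def clustering_coefficient(adj, u):
    """adj: dict mapping each node to the set of its neighbors."""
    neighbors = adj[u]
    k = len(neighbors)
    if k < 2:
        return 0.0                                   # no pair of neighbors to connect
    links = sum(1 for v in neighbors for w in neighbors
                if v < w and w in adj[v])            # edges inside the neighborhood
    return links / (k * (k - 1) / 2)                 # ratio to the complete graph on k nodes

graph = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(clustering_coefficient(graph, "a"))            # 1 edge (b, c) out of 3 possible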

Shannon entropy. This is a metric Shannon introduced in a seminal, and now historical, paper.

  • Shannon, C. E. (1948). “A Mathematical Theory of Communication.” The Bell System Technical Journal 27: 379-423, 623-656.

Shannon's entropy measures just how much a node depends on a single other node – roughly speaking, a node that must go through a single other node to reach all other nodes in the graph has low entropy. On the contrary, a node having a high number of alternative routes to the different parts of the graph has high entropy. Shannon's entropy may be defined using the edge weights w(u,v) associated with the edges (u,v) incident to a node u, and is computed as S(u) = - \sum_{v \in N(u)} w(u,v) \log(w(u,v)) (requiring that the weights around u add up to 1).

Shannon's entropy is maximal when all weights are equal, and decreases as weights concentrate on a single node.
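A sketch, assuming the weights on the edges around u are stored in a dict and already normalized to sum to 1:

import math

def shannon_entropy(weights):
    """weights: dict mapping each neighbor v of u to w(u, v), with the weights summing to 1."""
    return -sum(w * math.log(w) for w in weights.values() if w > 0)

print(shannon_entropy({"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}))   # maximal: log(4)
print(shannon_entropy({"a": 0.97, "b": 0.01, "c": 0.01, "d": 0.01}))   # much lower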

Exercises / Assignments

  1. Browse Burt's paper, or search the web, then implement and experiment with Burt's constraint and hierarchy metrics. Compare these metrics with other metrics using real graphs or artificial datasets.
  2. Implement and experiment with Shannon's entropy, comparing it with node degree, clustering coefficient and Burt's metrics.

Closeness centralities / Eccentricity

Harary Status. This metric due to Harary was also introduced by Shimbel in an earlier paper.

  • Harary, F. (1959). “Status and contrastatus.” Sociometry 22: 23-43.
  • Shimbel, A. (1953). “Structural parameters of communication networks.” Bulletin of Mathematical Biology 15(4): 501-507.

The metric depends on the node distances d(u,v) in the graph and is defined as h(u) = \sum_{v \in V} d(u,v). As a consequence, a node with a lower status value is globally closer to all nodes in the graph.

Graph eccentricity. The eccentricity of a node is the largest distance separating it from any other node in the graph: ecc(u) = \max_{v \in V} d(u,v).
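Both metrics only need the distances d(u,v), which on an unweighted graph can be obtained with a breadth-first search. A sketch (plain Python, illustrative names):

from collections import deque

def bfs_distances(adj, u):
    """adj: dict mapping each node to its neighbors; returns node -> distance from u."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def harary_status(adj, u):
    return sum(bfs_distances(adj, u).values())       # h(u) = sum of distances

def eccentricity(adj, u):
    return max(bfs_distances(adj, u).values())       # ecc(u) = largest distance

graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(harary_status(graph, "a"), eccentricity(graph, "a"))   # 6 3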

Exercises / Assignments

  1. Read Arbesman and Christakis's recent paper, implement and experiment with their insularity metric. This requires identifying communities in the network. Use, for instance, one of the available clustering algorithms (with Tulip).

Arbesman, S. and N. A. Christakis (2010). “Leadership Insularity: A New Measure of Connectivity Between Central Nodes Networks.” Connections: bulletin of the International Network for Social Network Analysis 30(1): 4-10.

Computing network metrics on larger graphs

This section looks at strategies one can implement when a metric needs to be computed on a large graph. One such strategy is to sample nodes in the graph, that is, to compute the exact metric but only for a subset of the nodes.

Yuntao, J., J. Hoberock, et al. (2008). "On the Visualization of Social and other Scale-Free Networks." Visualization and Computer Graphics, IEEE Transactions on 14(6): 1285-1292.

Another approach is to compute an approximation of the target metric using a faster algorithm. Since metrics are often used to induce a colormap on nodes or edges, an approximation often does the job.

The trick is the following:

Perform a random walk on the graph
   Select a node
   Iterate long enough
      Go to a neighbor node and "do something"
         (local computation, store a value, etc.)
   Once you're finished hopping through nodes,
      run an iteration on nodes to collect,
      compute a final value

A walk to compute node degree

Let us first look at a simple example, that of computing the degree of nodes. This computation is linear in the number of edges of the graph (and we obviously do not need a faster algorithm here, but we do it for the sake of illustrating our approach).

Equip each node with a counter initially set to zero
Run a random walk on the graph
   When on a node, hop to a neighbor by selecting one at random
   Each time you visit a node, increment the counter by one
When finished, assign nodes the value counter/number of steps in walk

If the walk is iterated long enough, the value assigned to a node u turns out to be (almost) equal to d(u)/2|E|, that is, proportional to its degree d(u).

To see this, observe that a random walk can be implemented using matrix algebra. Indeed, define a matrix M indexed by nodes, where M_{v,u} equals the probability of reaching node v from node u, that is M_{v,u} = 1/d(u) if v \in N(u) and M_{v,u} = 0 otherwise.

Indeed, consider the column vector e_u with a single 1 at position u (and 0 everywhere else). The matrix multiplication M e_u then yields a vector whose entry at v equals 1/d(u) exactly when v is a neighbor of u. This precisely says that the walk can reach any of u's neighbors in a single step, all with equal probability. Now, starting from this vector storing all possibilities, we may iterate and compute M(M e_u) = M^2 e_u to simultaneously obtain the probabilities of reaching each node through two steps starting at u. And so on and so forth.

Iterating this again and again, defining vectors x_k = M^k e_u, we obtain a limit vector x^* giving the probabilities of ending on any node (starting from u) when randomly walking on the graph.

Now, we can precisely compute this limit vector: its entry for node v equals d(v)/2|E|. To see that it is indeed the limit vector, we may show that it is stable (it is a fixed point), since (M x^*)_v = \sum_{u \in N(v)} (1/d(u)) \cdot d(u)/2|E| = d(v)/2|E| = x^*_v, hence M x^* = x^*.

All in all, we have shown how one may obtain a relatively good approximation of node degree using a random walk.
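A sketch of this walk in plain Python (illustrative names; the counter/steps ratio converges to d(u)/2|E| provided the graph is connected and non-bipartite, so multiplying by 2|E| recovers a degree estimate):

import random

def estimate_degrees(adj, steps=100000):
    """adj: dict mapping each node to the list of its neighbors."""
    counts = {u: 0 for u in adj}
    node = random.choice(list(adj))
    for _ in range(steps):
        node = random.choice(adj[node])       # hop to a uniformly chosen neighbor
        counts[node] += 1                     # increment the visited node's counter
    m = sum(len(vs) for vs in adj.values()) / 2              # number of edges |E|
    return {u: counts[u] / steps * 2 * m for u in adj}       # visit frequency * 2|E| ~ degree

graph = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
print(estimate_degrees(graph))   # roughly a: 3, b: 2, c: 2, d: 1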

A walk to compute node centrality

The trick we have shown is actually quite useful when dealing with metrics having a high time complexity, and where one is ready to accept an approximation of a metric instead of its genuine value.

We will now look at how a random walk can help estimate node centrality, instead of using the betweenness centrality for nodes which is known to have high time complexity.

This idea is based on a work by:

Kermarrec, A.-M., E. Le Merrer, et al. (2011). "Second order centrality: Distributed assessment of nodes criticity in complex networks." Computer Communications 34(5): 619-628.

using a random walk. The authors observe that when randomly walking on a graph, assuming nodes have equal probability of being visited, central nodes are revisited on a more stable basis. That is, the time you need to revisit a central node is quite stable, as opposed to non-central nodes, which turn out to be visited in a much more irregular manner – so they claim.

The idea then is to design a random walk and collect the ticks at which the walk goes through each node, and then compute the standard deviation of the time differences. Lower values for this metric then help to identify central nodes.

There is one thing though that must be looked at. For the metric to be correct, every node must be visited about the same number of times. Now, we know that the usual random walk goes through a node a number of times that is proportional to its degree. We would however like the walk to go through nodes (approximately) the same number of times, so the routing process must be slightly modified.

The trick here is to use the Metropolis-Hastings algorithm. There is an important observation to make about the previous random walk process computing the degree of nodes. The values the walk converges to can actually be viewed as a probability distribution on nodes. What the Metropolis-Hastings algorithm does is modify the routing procedure to force the walk towards any probability distribution we wish. In our case, we wish to force the walk towards the uniform distribution (all nodes with the same probability 1/|V|).

The algorithm proceeds as follows to define a new walking procedure W' based on the previous walking procedure W:

When arriving at a node v, select the candidate node v' towards which the walk W would send v
Consider the acceptance ratio a = (pi(v') q(v',v)) / (pi(v) q(v,v')), using the target probability distribution pi,
   where q(x,y) is the probability that W proposes y when at x
   (for a uniform target and a uniform-neighbor walk, a simplifies to d(v)/d(v'))
Draw a random number p in [0, 1]
   If p < a then go to v'
   Else stay on v and try again at the next step
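A sketch combining the corrected walk with the standard deviation of return times (the second order centrality), assuming the graph is a dict of neighbor lists; function and variable names are illustrative:

import random, statistics

def second_order_centrality(adj, steps=200000):
    """Standard deviation of return times under a Metropolis-corrected (uniform) walk."""
    visit_ticks = {u: [] for u in adj}
    node = random.choice(list(adj))
    for t in range(steps):
        candidate = random.choice(adj[node])             # proposal: a uniform neighbor
        # Metropolis-Hastings acceptance for a uniform target: min(1, d(node)/d(candidate))
        if random.random() < len(adj[node]) / len(adj[candidate]):
            node = candidate
        visit_ticks[node].append(t)                      # record the tick of this visit
    return {u: (statistics.stdev(t2 - t1 for t1, t2 in zip(ticks, ticks[1:]))
                if len(ticks) > 2 else float("inf"))
            for u, ticks in visit_ticks.items()}

graph = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
scores = second_order_centrality(graph)
print(min(scores, key=scores.get))   # the most central node has the most regular return times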

Exercises / Assignments

  1. Implement these two random walks using Tulip/python
  2. Compare the second order centrality metric with the usual betweenness centrality on nodes (for smaller graphs)

Bipartite graphs
