Topic title: Web Graphs and Web Algorithms

Supervisors: Ralf Klasing, Cyril Gavoille

Laboratory and research team: Combinatoire et Algorithmique

 

Detailed description:

 

Large-scale real dynamic networks can be modeled as discrete random
processes which evolve over time. We refer to such models as web
graphs. Over time, new vertices and edges are attached to (or deleted
from) the existing web graph according to predefined rules. Typically,
such rules use a mixture of preferential attachment to vertices of
higher degree (copying) and random selection of vertices. The copying
aspect of the procedure mimics social behaviour, which often tends to
follow the popular option. These processes, introduced in the 1990s,
differ substantively from the traditional models of random graphs,
introduced by Erdős and Rényi in the 1950s, where the number of
vertices remains fixed and all choices of edge insertion are made
uniformly at random. Web graphs were proposed as models of the Web,
whose degree sequence differs quantitatively from the degree sequence
of comparable random graph models. Subsequently, web graphs were found
to model aspects of many physical and social processes (small-world
graphs).
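To make the growth rule concrete, the following Python sketch grows a
graph by attaching each new vertex either to a vertex chosen with
probability proportional to its current degree (the copying /
preferential component) or to a uniformly random existing vertex. The
mixing probability p and the number m of edges per new vertex are
illustrative assumptions, not parameters of any particular model from
the literature.

    import random

    def grow_web_graph(steps, m=2, p=0.5, seed=None):
        """Grow a graph by adding one vertex per step.

        Each new vertex receives m edges; each endpoint is chosen either
        preferentially (with probability p, proportional to current degree)
        or uniformly at random among the existing vertices.
        """
        rng = random.Random(seed)
        # Start from a single edge so that all degrees are positive.
        adjacency = {0: {1}, 1: {0}}
        # Multiset of edge endpoints: each vertex appears once per incident
        # edge, so sampling from this list is sampling by degree.
        endpoints = [0, 1]

        for v in range(2, steps + 2):
            adjacency[v] = set()
            targets = set()
            while len(targets) < min(m, v):
                if rng.random() < p:
                    targets.add(rng.choice(endpoints))   # preferential
                else:
                    targets.add(rng.randrange(v))        # uniform random
            for u in targets:
                adjacency[v].add(u)
                adjacency[u].add(v)
                endpoints.extend([u, v])
        return adjacency

    if __name__ == "__main__":
        g = grow_web_graph(10000, m=2, p=0.75, seed=1)
        degrees = sorted((len(nb) for nb in g.values()), reverse=True)
        print("max degree:", degrees[0], "median:", degrees[len(degrees) // 2])

Raising p makes the degree sequence more skewed, which is the
qualitative effect the copying mechanism is meant to capture.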

 

The aim of our research on web graphs is to develop accurate
stochastic models of existing large-scale dynamic networks (e.g. the
Web, sub-communities of the Web, faulty communications networks,
peer-to-peer networks), and to use these models to develop efficient
algorithms for these networks. Our particular research aims are
outlined below.

 

(i) Models of web graphs.

 

We will refine the analysis of theoretical models of web graph
processes. In particular, there has been little progress in modeling
directed networks (especially where there is correlation between
in-degree and out-degree) and web graphs exhibiting vertex and edge
deletion. We will use web graphs to model peer-to-peer (P2P) networks
and dynamic networks arising in telecommunications, where the
possibility of random technical faults or of spontaneous joining (and
leaving) behaviour has to be considered. P2P networks are based on
members of a 'community' sharing resources openly, with no globally
imposed structure but with local control over the joining protocol
(who points to whom). Typically, P2P networks exhibit contradictory
requirements between low node degree (fairness) and small diameter
(closeness). As the joining and leaving protocol is (partially)
anarchic, the network must continuously restructure itself in a
distributed manner to maintain connectivity.
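The tension between churn and connectivity can be illustrated with a
toy simulation. In the sketch below a newcomer links to a few
uniformly chosen existing peers and departing peers are chosen
uniformly at random; both rules, and the parameters, are assumptions
made only for illustration and do not describe any specific joining
protocol.

    import random
    from collections import deque

    def is_connected(adjacency):
        """Breadth-first search reachability check."""
        start = next(iter(adjacency))
        seen, queue = {start}, deque([start])
        while queue:
            for u in adjacency[queue.popleft()]:
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        return len(seen) == len(adjacency)

    def simulate_churn(rounds, join_prob=0.6, links_per_join=3, seed=None):
        """Toy P2P churn: each round a peer either joins (linking to a few
        uniformly chosen peers) or a uniformly chosen peer leaves.  Returns
        the fraction of rounds in which the network remained connected."""
        rng = random.Random(seed)
        adjacency = {0: {1}, 1: {0}}
        next_id = 2
        connected_rounds = 0
        for _ in range(rounds):
            if rng.random() < join_prob or len(adjacency) <= 2:
                peers = rng.sample(sorted(adjacency),
                                   min(links_per_join, len(adjacency)))
                adjacency[next_id] = set(peers)
                for u in peers:
                    adjacency[u].add(next_id)
                next_id += 1
            else:
                v = rng.choice(sorted(adjacency))
                for u in adjacency.pop(v):
                    adjacency[u].discard(v)
            connected_rounds += is_connected(adjacency)
        return connected_rounds / rounds

    if __name__ == "__main__":
        print("fraction of rounds connected:", simulate_churn(5000, seed=1))

Running such a simulation shows how often naive rules preserve
connectivity; the research question is how to guarantee connectivity
(together with low degree and small diameter) in a fully distributed
way.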

 

(ii) Web search.

 

To some extent it is impossible, starting from a single node
(e.g. Google), to search the entire Web effectively, since the network
keeps growing. Can we nevertheless design efficient search procedures
for web graphs? One measure of efficiency is the proportion of a web
graph covered by a given search procedure. We will examine the
effectiveness of established graph search algorithms (which range from
random walks to breadth-first search) on web graphs as a function of
the limited memory available for searching.
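The coverage measure can be made concrete as follows. In this minimal
sketch the memory restriction is modelled, as an assumption, by
capping the size of the BFS frontier queue; the step budgets and the
small example graph are arbitrary.

    import random
    from collections import deque

    def random_walk_coverage(adjacency, steps, seed=None):
        """Fraction of vertices visited by a simple random walk."""
        rng = random.Random(seed)
        v = rng.choice(sorted(adjacency))
        visited = {v}
        for _ in range(steps):
            if not adjacency[v]:
                break
            v = rng.choice(sorted(adjacency[v]))
            visited.add(v)
        return len(visited) / len(adjacency)

    def bounded_bfs_coverage(adjacency, start, queue_limit, step_limit):
        """Fraction of vertices discovered by a breadth-first search whose
        frontier queue holds at most queue_limit vertices; discoveries that
        do not fit are counted as seen but never expanded."""
        visited, queue, steps = {start}, deque([start]), 0
        while queue and steps < step_limit:
            u = queue.popleft()
            steps += 1
            for w in adjacency[u]:
                if w not in visited:
                    visited.add(w)
                    if len(queue) < queue_limit:
                        queue.append(w)
        return len(visited) / len(adjacency)

    if __name__ == "__main__":
        # Tiny example graph (a hub plus a short tail); any adjacency dict,
        # e.g. one produced by a web graph generator, works here.
        g = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {0, 3}}
        print(random_walk_coverage(g, steps=10, seed=1))
        print(bounded_bfs_coverage(g, start=1, queue_limit=2, step_limit=10))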

 

(iii) Connectivity properties of web graphs.

 

The problems related to connectivity of web graphs include measuring
and increasing the robustness of networks under random and malicious
deletion of edges and vertices. One such connectivity problem we are
currently studying is the computation of dominating sets in web
graphs. A dominating set is a set of nodes such that every node of the
graph either belongs to the set or is adjacent to a node in the set.
In relation to web graphs, a dominating set may be used to devise
efficient web searches: by storing the pages in a dominating set, a
searcher could use these pages to quickly visit all other pages
(e.g. a minimal node index for Google). Since web graphs evolve over
time, any algorithm for these graphs must be "on-line" in the sense
that the decision to add a particular vertex to the dominating set is
taken without knowledge of the future structure.
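To illustrate the on-line constraint only, the sketch below uses a
deliberately naive rule, an assumption made for the example and not an
algorithm studied in this project: an arriving vertex is added to the
dominating set whenever none of its neighbours already dominates it,
and the decision is never revised.

    import random

    class OnlineDominatingSet:
        """Maintain a dominating set of a growing graph.

        Naive on-line rule (illustration only): a new vertex is added to
        the dominating set unless one of its neighbours is already in it.
        Since the set only grows, every vertex stays dominated after its
        arrival, but no decision is ever reconsidered.
        """

        def __init__(self):
            self.adjacency = {}
            self.dominating = set()

        def add_vertex(self, v, neighbours):
            self.adjacency[v] = set(neighbours)
            for u in neighbours:
                self.adjacency.setdefault(u, set()).add(v)
            if not any(u in self.dominating for u in neighbours):
                self.dominating.add(v)

    if __name__ == "__main__":
        rng = random.Random(1)
        ds = OnlineDominatingSet()
        ds.add_vertex(0, [])
        for v in range(1, 2000):
            # Each newcomer attaches to two uniformly chosen earlier vertices.
            ds.add_vertex(v, rng.sample(range(v), min(2, v)))
        print("dominating set size:", len(ds.dominating),
              "of", len(ds.adjacency), "vertices")

The interesting question is how much larger than the smallest
dominating set of the final graph such an irrevocable on-line choice
can be forced to be.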

 

(iv) Structural classification and the identification of
     sub-communities in the Web.

 

There is a strong need to identify special-interest groups and 'secret
societies' in the Web using structural properties of the network. The
presence of such groups can be discerned by the higher-than-usual
linkage between their nodes. We aim to improve the available
algorithms for structural classification. Typically these can be
divided into graph algorithms for initial classification (finding
hidden vertex partitions) and probabilistic algorithms for improving
this classification (belief propagation). In this topic, structural
approaches such as degree sequence partitioning complement and compete
with semantic methods such as latent semantic indexing and other
rank-based matrix methods.
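Only the 'higher-than-usual linkage' criterion itself is made concrete
in the sketch below (the partition-finding and belief-propagation
machinery is far more involved): a candidate group of pages is scored
by comparing the density of its internal links with the density of its
links to the rest of the graph.

    def linkage_scores(adjacency, group):
        """Return (internal density, external density) of a candidate group.

        A group whose internal density clearly exceeds its external density
        is a candidate sub-community in the sense described above."""
        group = set(group)
        internal = sum(1 for v in group
                         for u in adjacency[v] if u in group) / 2
        external = sum(1 for v in group
                         for u in adjacency[v] if u not in group)
        n_in, n_out = len(group), len(adjacency) - len(group)
        internal_density = (internal / (n_in * (n_in - 1) / 2)
                            if n_in > 1 else 0.0)
        external_density = (external / (n_in * n_out)
                            if n_in and n_out else 0.0)
        return internal_density, external_density

    if __name__ == "__main__":
        # Two triangles joined by a single bridge edge.
        g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
             3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
        print(linkage_scores(g, {0, 1, 2}))   # dense inside, sparse outside
        print(linkage_scores(g, {0, 3, 4}))   # no clear community structure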

 

 
