Background of the ANNE project

The study of nonuniform random graphs, also known as "small-world" or "scale-free" networks, has attracted great multidisciplinary (and even popular) attention since the publication of the seminal Letter to Nature by Watts and Strogatz in 1998 [WaSt98]. It is indeed surprising how the a posteriori obvious nonuniformities in naturally occurring graph structures, such as social, biological, traffic or telecommunication networks, could escape the notice of the research community for so long. In the past five years a veritable flood of papers has appeared to compensate for this omission, working out the consequences of the fundamental observations contained in the Watts-Strogatz paper and its important follow-up by Barabási and Albert [BaAl99].

A large volume of this recent research has been concerned with measuring the characteristics of natural networks, such as the World Wide Web, the Internet, or various social collaboration networks, and testing their fit to either the Watts-Strogatz small-world model or the Barabási-Albert scale-free model. Another important research stream has examined the influence of network structure on various processes evolving on the network, such as the epidemic spreading of infectious diseases or computer viruses, or the resilience of a network to different types of local breakdown (e.g. random failure vs. deliberate attack on the network nodes). A third research direction has been the mathematical study of nonhomogeneous graph processes, with the goal of understanding how the observed, seemingly universal, nonuniform network characteristics could arise from simple aspects of the generative model.
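To illustrate how little machinery such a generative model needs, the following minimal Python sketch implements one common variant of Barabási-Albert preferential attachment: starting from a small clique, each new node attaches to m existing nodes chosen with probability proportional to their current degree. The function name and the simple edge-set representation are only illustrative, not part of any particular reference implementation.

    import random

    def barabasi_albert(n, m, seed=None):
        """Preferential-attachment sketch: grow a graph to n nodes,
        attaching each new node to m existing ones chosen with
        probability proportional to their current degree (m >= 2)."""
        rng = random.Random(seed)
        edges = set()
        # 'endpoints' contains one entry per edge endpoint, so choosing
        # uniformly from it selects nodes proportionally to their degree.
        endpoints = []
        # start from a complete graph on nodes 0 .. m-1
        for u in range(m):
            for v in range(u + 1, m):
                edges.add((u, v))
                endpoints += [u, v]
        for new in range(m, n):
            chosen = set()
            while len(chosen) < m:
                chosen.add(rng.choice(endpoints))   # degree-biased choice
            for u in chosen:
                edges.add((u, new))
                endpoints += [u, new]
        return edges

    if __name__ == "__main__":
        from collections import Counter
        degree = Counter()
        for u, v in barabasi_albert(10000, 3, seed=1):
            degree[u] += 1
            degree[v] += 1
        # the resulting degrees are heavy-tailed: a few hubs, many low-degree nodes
        print("min degree:", min(degree.values()), "max degree:", max(degree.values()))

Already at this scale the generated degrees typically span orders of magnitude, in contrast to the narrow degree range of a classical uniform random graph.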

From a computational perspective, an important but relatively little-studied research area is how the nonuniformity of network structure affects the behaviour of widely used network algorithms; or, more constructively, whether and how nonuniformities in network structure can be exploited to design algorithms that are more efficient on naturally occurring networks than the customary ones. Such issues are of interest in e.g. telecommunications applications, where the emerging technology of "ad hoc" networks, i.e. infrastructureless networks of wireless devices, is giving rise to a whole new range of questions concerning efficient and reliable routing algorithms, adaptive clustering of nodes for efficient topology maintenance, and distributed information sharing.

Furthermore, there are several interesting unresolved algorithmic, and to some extent statistical, questions concerning the analysis of very large nonuniform networks, most notably the World Wide Web. How can we reliably estimate the statistical characteristics of a network if at any given time we have access to only a very small part of it? How do we even generate a random sample of nodes from such a network? (Note that e.g. in the case of the Web graph the node degrees vary over several orders of magnitude, so any naive approach is likely to yield heavily biased samples. There really is no obvious way of picking a "random" node from the Web, because new nodes can only be reached via links from nodes that we have already seen.) The Web and other social networks also seem to be clustered in an interesting way into tightly connected subnetworks; how do we identify such clusters locally and efficiently, using only the information available from a small part of the full graph?
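One well-known way around the degree bias, sketched below in Python under the simplifying assumption that the reachable part of the network can be queried as an adjacency structure, is to replace a plain crawling walk by a Metropolis-Hastings random walk: a proposed step from the current node v to a neighbour w is accepted only with probability min(1, deg(v)/deg(w)), which turns the degree-proportional stationary distribution of the plain walk into a uniform one while using only local degree information. The hub-and-ring toy graph in the example is purely hypothetical.

    import random

    def plain_walk(neighbors, start, steps, rng):
        """Plain random walk on an adjacency dict.  Its stationary
        distribution is proportional to node degree, so high-degree
        nodes are heavily over-represented in the visited sample."""
        v, sample = start, []
        for _ in range(steps):
            v = rng.choice(neighbors[v])
            sample.append(v)
        return sample

    def metropolis_walk(neighbors, start, steps, rng):
        """Metropolis-Hastings random walk: a proposed step from v to a
        neighbour w is accepted with probability min(1, deg(v)/deg(w)),
        which makes the stationary distribution uniform over nodes while
        using only locally available degree information."""
        v, sample = start, []
        for _ in range(steps):
            w = rng.choice(neighbors[v])
            if rng.random() < min(1.0, len(neighbors[v]) / len(neighbors[w])):
                v = w
            sample.append(v)
        return sample

    if __name__ == "__main__":
        # hypothetical toy graph: one hub (node 0) joined to 50 leaves,
        # with the leaves also forming a ring among themselves
        n = 50
        neighbors = {0: list(range(1, n + 1))}
        for i in range(1, n + 1):
            neighbors[i] = [0, 1 + (i % n), 1 + ((i - 2) % n)]
        rng = random.Random(1)
        plain = plain_walk(neighbors, 1, 20000, rng)
        mh = metropolis_walk(neighbors, 1, 20000, rng)
        print("hub visit frequency, plain walk:", plain.count(0) / len(plain))
        print("hub visit frequency, MH walk:   ", mh.count(0) / len(mh))
        print("uniform target frequency:       ", 1.0 / (n + 1))

On this toy graph the plain walk visits the hub roughly a quarter of the time, while the corrected walk visits it close to the uniform target of 1/51; the same local correction idea applies to crawls of much larger graphs such as the Web.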


Last updated January 12, 2004.
URL: http://www.tcs.hut.fi/Research/CCC/Projects/ANNE/background.html