Creating a map of the world once involved listening to travellers' tales, sketching in a few interesting highlights and drawing dragons in the otherwise empty corners. When cartographers began to visit far-off lands, accuracy improved greatly and dragons disappeared.
There are no good maps of the internet because its cartographers are desk-bound. They run programs to create long lists of the routers that make up the network and send out test traffic to reveal the connections between them. Separately, they consult public routing protocol databases to establish the connectivity options of the network operators.
The atlases that result are strong on fine-grained maps of linkages, but they give little sense of structure or meaning. It is rather like mapping Africa by soliciting postcards from every sub-post office, checking the stamps for their country of origin, then drawing a spidery representation of every postal carrier's beat on a large sheet of paper.
Furthermore, internet map-making methods are deeply flawed. In the real world, routers can be invisible when providers use network technologies such as ATM or MPLS (the net is full of acronyms that remain meaningless when spelled out). Hence the lists of routers are incomplete, just as if some African post offices took no notice of postcards. In the real world, there are many routes between nodes but they are not mapped because they are for emergencies only - just as paths through crocodile-infested swamps will be avoided by postal workers whenever possible. At the organisation level, where routing is from one autonomous system (AS) to another, maps can be especially meaningless, just as if you failed to distinguish between South Africa and the little bit of Lagos that is the San Marino embassy garden. Level 3 operates a global AS with links on five or six continents, whereas a small hosting provider may have an AS that fills just half a bedroom. Recently, thousands of ASes have been created to support "multi-homed" companies that buy connections from two different internet service providers for (imagined) resilience. Mapping AS space or plotting the connections will have little meaning unless you leave your desk and talk to the network operators.
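To see what this desk-bound AS-level cartography amounts to, here is a minimal sketch of the standard approach: building an AS connectivity graph from BGP AS-path strings. The path data below is hypothetical (real maps are built from route-collector dumps); only the AS numbers 3356 (Level 3) and 1299 are real identifiers, and 64512 is drawn from the private-use range to stand in for a small multi-homed customer.

```python
from collections import defaultdict

# Hypothetical BGP AS-path observations; each string lists the ASes a
# route announcement traversed. Real data would come from a routing
# table dump at a route collector.
as_paths = [
    "3356 1299 64512",   # transit provider -> provider -> small customer AS
    "3356 64512",
    "1299 64512",
]

# Build an undirected AS adjacency graph from consecutive path entries.
graph = defaultdict(set)
for path in as_paths:
    hops = path.split()
    for a, b in zip(hops, hops[1:]):
        graph[a].add(b)
        graph[b].add(a)

# AS 64512 shows up as multi-homed: links to two different providers.
# This is exactly the structure that has inflated AS counts - and the
# graph says nothing about which links are backup-only or what they cost.
print(sorted(graph["64512"]))
```

The sketch also shows the flaw the review identifies: the graph records that links exist, but not whether they are primary routes, emergency backups, or commercially sensible - for that you must talk to the operators.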
Talking to the operators will also help reveal the constraints on internet connections. Engineering efficient links to nearby nodes is only a small part of the puzzle. Hardware limitations and the economics of sending traffic via competing routes are the real drivers. Operators will talk of "free peering" with companies of their own size and buying "transit peering" from multinational "tier one" providers. They will talk of the economics of joining internet exchange points, where 100 companies swap traffic, or of "private peering" on a one-to-one basis. The internet is shaped by economics and competition.
And so we come to this book. It claims to be about the evolution and structure of the internet, but that is misleading. It does not mention "peering", "multi-homed" or any of the real-world elements I have cited.
This book is about graph theory and academic attempts to build mathematical models with the same connectivity properties as people have measured, probably inaccurately, on the global internet. It is interesting enough, full of equations and as up to date as you can reasonably expect. But it tells you little about the global internet.
The book applies the methods of statistical physics to examine the structure of the internet at the macro level. The measurements made of nodes, links and paths, the power law distributions discovered and the long tails to those distributions have led the authors to conclude that the net is "scale free". The authors give details of many of the networking models that have been proposed, and the way simple connectivity rules create complex graphs. They then show how well, and often how badly, their statistics correspond with real-world measurements. This is the meat of the book, and it would be well suited to an MSc course on internet modelling.
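As a hedged illustration of the modelling style the book surveys - not the authors' exact models - a minimal preferential-attachment growth rule of the Barabási-Albert kind shows how a simple connectivity rule produces the heavy-tailed degree distributions behind the "scale free" claim:

```python
import random
from collections import Counter

random.seed(42)

# Minimal preferential-attachment sketch: each new node links to one
# existing node chosen with probability proportional to its current
# degree. Well-connected nodes keep attracting links - the
# "rich get richer" rule that generates power-law-like tails.
edges = [(0, 1)]    # seed graph: two nodes, one link
targets = [0, 1]    # node i appears here once per incident edge

for new_node in range(2, 5000):
    chosen = random.choice(targets)   # degree-proportional selection
    edges.append((new_node, chosen))
    targets.extend([new_node, chosen])

# Degree of each node = number of times it appears in `targets`.
degree = Counter(targets)

# A few hubs acquire degrees far above the typical node: a long tail.
print("max degree:", max(degree.values()))
print("median degree:", sorted(degree.values())[len(degree) // 2])
```

Whether graphs grown by rules like this actually match the measured internet is precisely the question the book spends its central chapters on.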
The chapters on "overlay networks" such as the web, email and peer-to-peer are limited in scope, and that on internet worms lacks recent results.
The book's core is solid enough, but it has the wrong title. I fear for these cartographers in the real world; I suspect they will not be able to recognise a dragon before it bites them.
Richard Clayton is a researcher in the Computer Laboratory, Cambridge University. He spent many years working for an internet service provider.
Evolution and Structure of the Internet: A Statistical Physics Approach
Authors - Romualdo Pastor-Satorras and Alessandro Vespignani
Publisher - Cambridge University Press
Pages - 267
Price - £40.00
ISBN - 0 521 82698 5