A Look at Internet2

Copyright (c) 1999-2001 by Rich Morin
published in Silicon Carny, November 1999


Internet2 is a US-based, advanced research project in networking technology. Although it is still very much a "work in progress", it is already starting to show interesting results.

The infrastructure of the Internet has proven remarkably stable in the face of geometric growth rates and even outright abuse. Nonetheless, the system has some built-in deficiencies that need to be addressed if it is to carry us forward into the age of ubiquitous, real-time, multimedia networking.

And, despite its overall stability, the Internet is an unwieldy and expensive place to perform basic networking experiments. There are far too many players to coordinate and the noise generated by random users (including the screaming when things go awry) disturbs the researchers' concentration.

So, a collection of some 150 universities has joined with assorted high-tech corporations and government agencies to build Internet2, a prototype for the next generation of the Internet.

Lest you get the wrong impression, however, Internet2 is not so much a "thing" as an "activity". The Internet2 researchers are performing assorted experiments, using both existing and newly-created resources. The information produced by these experiments is the real product; the resources themselves may be borrowed or even temporary in nature. Internet2's mission statement is short and clear, if a bit nationalistic in tone:

Facilitate and coordinate the development, deployment, operation and technology transfer of advanced, network-based applications and network services to further U.S. leadership in research and higher education and accelerate the availability of new services and applications on the Internet.

Thus, the toys will belong to (and directly benefit) only domestic players, but we can expect interesting results to trickle out to the rest of the Internet over time. Actually, I would expect the results to be pretty freely available: many of the researchers will be from other countries and some of the partner corporations (e.g., IBM and MCI) are distinctly multinational in scope.

Specific Objectives

Support for real-time and multimedia applications is crucial. The current Internet works fine for email and FTP, less well for complex web pages, and rather poorly for streaming audio and video, time-critical scientific experiments, etc. The reason, in brief, is that all packets are treated in the same manner.

Even though I might not care about the exact delivery time of a particular email message or FTP packet, this "bulk mail" gets the same handling as my time-critical audio stream. Worse, your web page downloads can get in my way (and vice versa). By establishing mechanisms for guaranteed "Quality of Service" (QoS), Internet2 can keep bulk traffic from interfering with time-critical data.
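To make the idea concrete, here is a minimal sketch (mine, not anything Internet2 has specified) of strict-priority scheduling, one simple way a router could let time-critical packets jump ahead of bulk traffic. The packet names and priority values are invented for illustration:

    # Strict-priority scheduling: lower priority number = more urgent.
    # Bulk traffic (FTP, web pages) never delays the audio stream.
    import heapq

    def schedule(packets):
        queue = []
        for prio, name in packets:
            heapq.heappush(queue, (prio, name))
        order = []
        while queue:
            prio, name = heapq.heappop(queue)
            order.append(name)
        return order

    # Hypothetical traffic mix: audio frames marked 0 (urgent), bulk marked 1.
    arrivals = [(1, "ftp-chunk"), (0, "audio-frame-1"),
                (1, "web-page"), (0, "audio-frame-2")]
    print(schedule(arrivals))
    # ['audio-frame-1', 'audio-frame-2', 'ftp-chunk', 'web-page']

Real QoS mechanisms are far more subtle (they must also keep bulk traffic from starving entirely), but the basic idea is the same: mark the packets, then treat them differently.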

By doing this research in the cloistered halls of academia, the participants may be able to investigate the thorny allocation issues (whose data gets dropped, in a pinch?) in a relatively civilized setting. In any case, Internet2 is committed to providing QoS support, so the technical aspects of these issues (at least) will need to be worked out.

Once QoS is in place, a variety of advanced applications can be investigated. Teleconferencing and shared whiteboards are obvious starting points, as are real-time data analysis and experiment control. Demonstrations of network-based health care and environmental monitoring are also being planned, showing that the QoS support is quite serious.

There are some other issues, such as next-generation IP routing, which also need to be examined in a high-bandwidth setting. The Internet's 32-bit IP addressing scheme won't last forever, but the current Internet isn't the best place to try out changes.
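For scale, a bit of back-of-the-envelope arithmetic (assuming the successor is the 128-bit addressing of IPv6, which was the leading candidate at the time):

    ipv4 = 2 ** 32      # current 32-bit address space
    ipv6 = 2 ** 128     # proposed 128-bit address space
    print(f"IPv4: {ipv4:,} addresses")    # 4,294,967,296
    print(f"IPv6: {ipv6:.2e} addresses")  # roughly 3.40e+38

A few billion addresses sounds like a lot, but reserved ranges and coarse allocation policies eat into it quickly.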

A Typical Experiment

A recent high-profile experiment (read, demonstration) by Stanford University and the University of Washington was fairly typical of the kind being performed. "HDTV Over Internet2 Networks" tested the ability of an active network link to carry multiple streams of High Definition Television.

HDTV provides a very high-resolution image (1920x1080 pixels), more than five times as detailed as the best standard (NTSC) TV picture. Not surprisingly, HDTV requires a lot of bandwidth. The HD video feed starts out as a 1.5 Gbps data stream (about 1000 T-1 links :-).

Fortunately, HDTV compressors can reduce this total substantially. For this experiment, both a "broadcast studio quality" version (140 Mbps, embedded in a 270 Mbps data stream) and a lower quality version (40 Mbps) were sent. Thus, some 310 Mbps of HD video were added to the normal traffic on a 622 Mbps (OC-12) network link.
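For readers who like to check the arithmetic, here is a quick sketch using the figures quoted above (the only numbers are the ones from the article itself):

    raw_hd = 1.5e9      # uncompressed HD feed, bits per second
    t1     = 1.544e6    # one T-1 link
    print(f"T-1 equivalents: {raw_hd / t1:.0f}")   # ~972, i.e. "about 1000"

    studio = 270e6      # data stream carrying the 140 Mbps studio-quality feed
    lower  = 40e6       # lower-quality feed
    oc12   = 622e6      # OC-12 link capacity
    total  = studio + lower
    print(f"HD traffic: {total/1e6:.0f} Mbps, "
          f"{total/oc12:.0%} of the OC-12 link")   # 310 Mbps, about 50%

In other words, the HD streams consumed roughly half of the link, leaving the other half for the normal traffic.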

Because the experiment was designed to test the ability of current networks to carry high-bandwidth data, no explicit QoS support was employed. Consequently, the experiment provides a baseline against which future QoS enhancements can be measured.

IP-based networks cannot guarantee instantaneous communication, so a substantial amount of buffering (several seconds) was used to smooth out any hiccups. This is quite acceptable for broadcasting television programs, but it would wreak havoc in any interactive application.
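The idea is simple enough to sketch: the receiver deliberately plays everything a few seconds late, so a packet delayed by network congestion can still arrive before it is needed. The timings below are invented for illustration:

    BUFFER = 3.0    # assumed playout delay, in seconds

    # Hypothetical (send_time, arrival_time) pairs; up to ~2 s of jitter.
    packets = [(0.0, 0.4), (1.0, 1.2), (2.0, 3.9), (3.0, 3.5), (4.0, 4.1)]

    for sent, arrived in packets:
        playout = sent + BUFFER
        status = "on time" if arrived <= playout else "LATE (glitch)"
        print(f"sent {sent:.1f}s, plays {playout:.1f}s: {status}")

Every packet above makes its playout deadline, despite the uneven arrivals. For an interactive application, of course, a three-second lag between the speakers would be intolerable.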

Because my brother Jerry was involved in setting up the experiment, I was able to come down to the Stanford facility and look things over. I even got to help uncrate and set up a $250K HDTV compressor! All told, perhaps $500K worth of equipment had been brought in specifically for the experiment: an HDTV camera, monitor, and recorder, a couple of video compressors, and some special-purpose interface gadgetry.

Despite the cost of the equipment, the setting was modest: a 20x20 workroom containing a communications relay rack and several mismatched tables full of electronic equipment. Most of the equipment, in fact, consisted of test systems for the Y2K-compliance testing project that normally occupies the room.

On the other hand, the room was located next door to Stanford's Network Operations Center (NOC). Consequently, it was well served by high-bandwidth network connections, knowledgeable staff, and other critical resources. In short, it was just about the perfect environment to conduct a small, high-speed networking experiment.

The receiving end of the demonstration, in contrast, was set in an auditorium at the University of Washington. A "Sony HDTV Videowall" was used to display the results. This kind of publicity effort, although useful in promoting particular applications, is very unlike the quiet research that characterized the early Internet (i.e., ARPAnet).

Unlike the ARPAnet, Internet2 isn't operating in a vacuum; the players know all too well that the world is watching them and waiting (a tad impatiently) for the results. I'm happy to report that the players seem to be taking their responsibility seriously. Consequently, we should have some delightful technology coming our way over the next few years.

About the author

Rich Morin (rdm@cfcl.com) operates Prime Time Freeware (www.ptf.com), a publisher of books about Open Source software. Rich lives in San Bruno, on the San Francisco peninsula.