What really happened in digital technology in the past five years.
Precious little has happened in digital technology over the past five years.
Steady increases in processor speed and storage size have become as predictable as a child's growth. Only those far removed from the phenomenon even notice - Uncle George ruffles technology's hair and exclaims, "My, you've grown fast!" The rest of us just buy what we need and plan on trading up in six months.
What innovations have been produced - object-oriented programming, spread-spectrum radio - are mostly on a par with the periodic return of wide ties. They prompt little more than a reach back into the drawer to pull out that fading business plan from when both fields were last hot - the late 1970s.
I hear you clamoring, "What about the Net?" Well, yes, but what is more surprising is how little even that really changed. At the network's physical layer - its fiber-optic backbone - a couple of real breakthroughs have occurred. But these have merely allowed bandwidth to continue increasing at the same monotonous rate as it has for the last 20 years. And predictable progress hardly feels like progress at all.
As for the rest of the networking stack - how computers talk and what they talk about - we have witnessed incredible resistance to change. Almost all attempts at innovation have been squashed. And it isn't because these innovations were bad or impractical; existing standards have simply become too pervasive for us to even think about changing them.
What we're talking about here is a phenomenon called "network effects." First studied by economists such as Stanford's Paul David, network effects are peculiar to technologies like electrical power and internetworking, which put a premium on compatibility. The concept is not difficult: When, say, 10 corporations decide to adopt alternating current rather than direct current as their electrical standard, AC becomes not 10 times, but more like 100 times, more attractive to subsequent decisionmakers. The result: technology lock-in.
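For the back-of-the-envelope minded, the arithmetic is easy to sketch. Assume - and it is only an assumption for illustration - that a standard's pull grows with the number of useful pairings among its adopters:

```python
# Toy model of network effects: suppose a standard's attractiveness grows
# with the number of possible pairings among its adopters (~ n squared).
# The numbers are illustrative, not measured.

def attractiveness(adopters: int) -> int:
    """Rough proxy for a standard's pull: possible connections among adopters."""
    return adopters * (adopters - 1) // 2

for n in (1, 10, 100, 1000):
    print(f"{n:>5} adopters -> {attractiveness(n):>8} possible connections")

# Ten adopters offer 45 pairings; a hundred offer 4,950 - roughly 100 times
# the pull, which is why latecomers pile onto whichever standard is ahead.
```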
Just look at IP, which defines how data is cut up into packets and addressed in order to reach its destination safely. During the 1980s it was one of many similar standards. Indeed, most thought it would soon be displaced by another protocol, OSI, which had the US government's support. But while the bureaucrats fumbled, IP continued to add users. By 1992 there was no turning back: IP had reached lock-in.
The last five years in network technology have been about watching this single fact run its course. Like some kind of technological Godzilla, IP has gobbled up WANs and LANs, leaving behind a trail of dying equipment vendors who supported such now-forgotten protocols as DECnet and IPX.
Even more dramatically, IP now stands poised to take control of the world's phone systems. While Bellheads nattered on about how IP simply didn't make sense for real-time traffic, that it could never provide the same voice quality as the circuit-switched phone network, IP snowballed right over their objections. But with Internet traffic growing 300 to 400 percent a year, and with voice traffic increasing by only 7 percent, the writing is plainly on the wall, even for myopic telcos: It's easier to put voice over the data network than to build two separate networks. And - whomp! - the IP snowball rolls on.
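The writing on the wall is plain old compound interest. Here's a sketch; the starting ratio - voice carrying, say, 20 times as much traffic as the Internet - is an assumption chosen only to make the point:

```python
# Compound growth: data traffic at ~400 percent a year versus voice at 7 percent.
# The 20-to-1 head start for voice is an illustrative assumption.

voice, data = 20.0, 1.0                  # arbitrary units of traffic
voice_growth, data_growth = 1.07, 5.0    # +7% and +400% per year

for year in range(1, 6):
    voice *= voice_growth
    data *= data_growth
    print(f"year {year}: voice {voice:7.1f}, data {data:9.1f}")

# Even with a 20-to-1 head start, voice is overtaken inside three years.
```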
Network effects and predictable progress have one clear benefit: they allow high tech firms to plan their businesses. Now that it's clear that IP has won, application vendors can develop programs without worrying that the technological rug will be pulled out from under them when they are finally ready to ship. And thanks to the predictable progress formalized in Moore's Law, chip manufacturers can amortize their costs, secure in the knowledge that a fabrication line will be obsolete after a year and a half.
Powerful forces, to be sure. But they have also conspired to eliminate any real excitement.
It's at the physical layer of the Net - its mostly invisible plumbing - that IP's success has fueled the most important developments. As usual, raw necessity was the reason: the carriers completely screwed up.
During the 1980s, the big carriers - Sprint, MCI, AT&T - deployed fiber-optic cable based on assumptions that voice traffic would continue to grow at a fairly anemic 7 percent annual rate, with fax and data growth maybe three times that. Eager to make hay about "sound quality so good you can hear a pin drop," the three stooges raced to deploy systems capable of unprecedented speeds: hundreds of megabits per second!
But in their rush, the telcos made two big mistakes: First, they deployed only a few strands of cable, even though the difference in cost between deploying five fiber-optic strands and 50 is negligible. Second, they deployed most of it by the quickest means possible: they buried it in trenches. That's faster and cheaper than laying conduit, but it makes adding more capacity later unthinkably expensive.
The carriers discovered their error when data traffic, powered by the booming Web, began by the mid-1990s to expand not at the 20 to 25 percent annual growth rate they had expected, but at more like 400 to 700 percent. Suddenly they needed a way to get more data over those buried lines.
The first part of the answer came with the development of wavelength-division multiplexing. WDM allows multiple light wavelengths to travel simultaneously down a fiber-optic strand. Instead of just blue light, for example, a cable can also carry red, orange, and yellow. WDM boxes deployed at network endpoints allow the carriers to neatly multiply the capacity of their networks. And as vendors such as Ciena Corporation, Alcatel Telecom, and Lucent Technologies vie for the market, technologies that support 48 and even 100 channels per strand have evolved, allowing for further bandwidth increases.
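The payoff is straight multiplication. A sketch, assuming each wavelength carries roughly 2.5 Gbps - a typical per-channel rate for the systems of the day, though the exact figure varies by vendor:

```python
# WDM capacity = (wavelengths per strand) x (bit rate per wavelength).
# The 2.5 Gbps per-channel rate is an assumed, era-typical figure.

per_channel_gbps = 2.5

for channels in (1, 8, 48, 100):
    total = channels * per_channel_gbps
    print(f"{channels:>3} wavelengths -> {total:6.1f} Gbps per strand")

# Going from 1 to 100 wavelengths multiplies a buried strand's capacity
# a hundredfold - without digging a single new trench.
```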
But the carriers weren't saved yet: every time you increase the speed of a conventional electro-optical network - an event that has become mind-numbingly routine - you have to replace all the electronic equipment attached to it. That's expensive. The best examples are the amplifiers attached every 100 or so kilometers along the backbone. A light signal gets dimmer the farther it goes, until eventually it can't be distinguished from background noise. Amplifiers prevent this diminution by converting the optical signal into electrical bits, then using those to generate reinvigorated photons. This is slow and inefficient. Worse, every time you change the speed of the network, you have to replace all the amps.
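Why every 100 kilometers? Long-haul fiber loses light at roughly 0.2 dB per kilometer - treat that figure as a typical assumption rather than gospel - and decibels compound mercilessly:

```python
# Why amplifiers sit every ~100 km: attenuation compounds in decibels.
# The 0.2 dB/km loss is a typical figure for long-haul fiber, assumed here.

loss_db_per_km = 0.2

for km in (50, 100, 200):
    loss_db = loss_db_per_km * km
    remaining = 10 ** (-loss_db / 10)
    print(f"{km:>3} km: {loss_db:4.0f} dB of loss, {remaining:.2%} of the light left")

# After 100 km only about 1 percent of the launched power remains - amplify
# now, or lose the signal in the noise.
```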
Some neat innovation once again saved the day. Scientists discovered that they could create an all-optical amplifier by treating a short section of fiber with the rare-earth metal erbium, an element whose atoms become "excited" when pumped with light and then boost a passing signal by emitting more light at the same wavelength. What makes erbium-doped amps so perfect is that metals are not smart: they just dumbly amplify whatever signal they see, so the same amp will work whether you're using 8-way or 100-way WDM, whether you're running at 1 or 100 Gbps.
This has ushered in what David Isenberg of AT&T calls "the stupid network." Or, perhaps more appropriately, it should be called "the transparent network" - transparent to speed and transparent to content. Lucent and others are now developing other all-optical components, including cross-connects and switches, that will bring the same properties to the rest of the network. The effect is to reduce what engineers call "friction" in the system and to allow for more frequent (read: more economical) upgrades. Just as the computer industry is predicated on Moore's Law - that chips will double in speed every 18 months, which companies can literally plan on - the telecom industry can be predicated on the transparent network. Change is routine and uneventful. It's progress, but what the late great Austrian economist Joseph Schumpeter disdainfully called "automatized progress."
At the Net's next level up, data link, the main accomplishment has been to squash change altogether. Data link defines how bits are packaged. In the 1980s, Ethernet became the LAN standard. For WANs, it's frame relay. One decade - and several orders of magnitude growth in traffic - later, both still reign supreme.
It wasn't supposed to be this way. Each of the last five years was supposed to be the Year of ATM - asynchronous transfer mode, a new standard that would unify not only LANs and WANs, but also voice and data traffic. ATM was going to run everywhere and everything; it would be the equivalent of the standard-gauge railroad track that unified Europe.
Perhaps ATM failed because of its brain-dead and bureaucratic design. As a practical matter, however, network effects deserve the credit.
LANs use the 24-year-old Ethernet protocol not because it's particularly good, but because everyone is used to it. To be sure, today's Gigabit Ethernet has been tweaked to adapt to demands that were unimaginable back when Bob Metcalfe was developing its ancestor at Xerox PARC in the 1970s. But Gigabit Ethernet is still Ethernet: it uses the same format for destination addresses and checksums that is ingrained in hundreds of millions of lines of code throughout the world. Any deviation is untenable, no matter how much wealth and happiness ATM advocates promise.
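That "same format" is quite concrete: six bytes of destination address, six bytes of source, a two-byte type field, the payload, and a 32-bit checksum on the end. A simplified sketch - real Ethernet adds a preamble, minimum-frame padding, and a fussier bit ordering for the checksum than this illustration bothers with:

```python
# Simplified Ethernet II frame: dst MAC, src MAC, EtherType, payload, checksum.
# Real frames add a preamble, padding rules, and a specific FCS bit ordering;
# this sketches the format, it is not a wire-exact encoder.
import struct
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    checksum = struct.pack("<I", zlib.crc32(header + payload))
    return header + payload + checksum

frame = build_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),   # broadcast address
    src_mac=bytes.fromhex("00a0c9123456"),   # an arbitrary, made-up address
    ethertype=0x0800,                        # the payload is an IP packet
    payload=b"hello from the LAN",
)
print(len(frame), "bytes, before padding")
```

Hundreds of millions of lines of code assume exactly this layout, which is why nobody dares change it.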
The story is almost exactly the same for WANs. Frame relay was never intended to go faster than 56 Kbps; then 1.5 Mbps was said to be the maximum, and then 45 Mbps. Somehow, frame relay has always been made to go faster. And it has to: there are simply too many endpoints out there that can speak frame relay.
Above data link is the network layer, the lair of IP itself. I've already described how IP decimated competing standards - DECnet, IPX, and OSI. Indeed, the momentum behind it is so strong that even advocates of IP version 6 - whose most substantial change is adding a few more address bits so that we don't run out of unique addresses - are finding it impossible to gain a foothold.
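Dismissive as "a few more address bits" sounds, the arithmetic of the jump - from 32-bit to 128-bit addresses - is worth a glance, even if it changes nothing about the protocol's character:

```python
# Address-space arithmetic: IPv4 addresses are 32 bits, IPv6 addresses are 128.

ipv4 = 2 ** 32
ipv6 = 2 ** 128

print(f"IPv4: {ipv4:,} addresses")                        # about 4.3 billion
print(f"IPv6: {ipv6:.3e} addresses")                       # about 3.4 x 10^38
print(f"ratio: {ipv6 // ipv4:.1e} IPv6 addresses per IPv4 address")
```

None of which, so far, has been enough to pry the installed base loose from plain old IP.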
IP's momentum was set in motion many years ago, and for the past five years we have only been witnessing the inevitable. But what has changed is the impact of that lock-in on the equipment used to route IP packets. Hence, a corollary to the law of network effects: As a standard becomes accepted, it not only becomes harder to change, it also becomes part of the hardware.
There is a parable to accompany this pronouncement. The crown jewel of Cisco Systems, the networking-equipment behemoth, has always been its software. The best router is one that knows the most protocols, and that's what made Cisco king - its routers smoothly speak everything from AppleTalk to SNA. But as everything but IP fades into irrelevance, being able to support AppleTalk is very small beer. What matters is being able to handle IP quickly. And that's a job for specialized hardware designed to do just one thing amazingly well.
This is the cause of the "switch versus router" marketing malarkey that you may have been hearing for the last year. A switch does more processing in hardware and is therefore faster, albeit less flexible, than a router. It may sound like an innovation, but it's more of a simplification. And while it may alter the balance of power among networking vendors, it reflects no fundamental technological change.
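The job in question - for router and switch alike - is a longest-prefix match against a forwarding table; the only question is whether it happens in general-purpose software or in dedicated silicon. A toy sketch, with invented prefixes and interface names:

```python
# Toy longest-prefix-match forwarding: the lookup a router does in software
# and a switch bakes into hardware. Prefixes and next hops are invented.
import ipaddress

forwarding_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "interface-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "interface-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in forwarding_table if addr in net]
    # The most specific route (longest prefix) wins.
    return max(matches, key=lambda item: item[0].prefixlen)[1]

print(next_hop("10.1.2.3"))    # interface-B
print(next_hop("192.0.2.1"))   # default-gateway
```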
There's one last network layer worth reviewing: application protocols. As little as five years ago, there was FTP for file transfer, Gopher for information exchange, NNTP for network news, and others. But the last five years have seen the same kind of consolidation here as we've seen elsewhere on the Net. In this case, the spoils go to HTTP.
Hypertext Transfer Protocol won because the Web, which is based on HTTP, appeals to the largest possible number of users. As the Web's popularity skyrocketed, corporate firewalls - intended to keep dangerous traffic from entering - were routinely configured to admit popular and relatively harmless HTTP traffic. Software developers saw this, and suddenly file transfers, audio, and even Java were being handled by HTTP. It has become the Net's Esperanto, rendering firewalls largely irrelevant in the process.
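Part of HTTP's charm is that it is nothing more than lines of text over a TCP connection, usually on port 80 - which is exactly why firewalls wave it through and why anything can ride inside it. A minimal sketch, with example.com standing in for any Web server:

```python
# HTTP is plain text over TCP, which is why firewalls wave it through.
# example.com is a stand-in host; any HTTP server would answer similarly.
import socket

request = (
    "GET / HTTP/1.0\r\n"
    "Host: example.com\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    reply = sock.recv(4096)

print(reply.decode("ascii", errors="replace").splitlines()[0])  # e.g. "HTTP/1.0 200 OK"
```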
Now in its ninth year, HTTP isn't a great protocol. It certainly isn't innovative. But thanks to the power of network effects, it's what we'll be living with for a long time to come.
And what about the endpoints, the machines and software actually connected by the Net? It should come as no surprise that how we write software has not changed significantly over the last five years, either. Computer scientists of the 1960s lamented the fact that programming was more of an art than a science, devoid of the rigor of, say, civil engineering. Today the same complaint echoes through universities, development labs, and corporations. Programming seems immune to progress.
Not that people haven't tried. We've seen structured programming, functional programming, computer-aided software engineering, object-oriented programming - you name it - all with about the same result: if you're a really good programmer, you can still write really good programs, and if you're a poor programmer, you're still screwed.
Take Java. Sun has marketed it in two ways: as a new and improved programming language that will replace the dominant C/C++ and as a revolutionary new platform that can unite disparate hardware and operating systems.
That Java offers a significant advantage over C++ is not in itself very impressive. Many languages are better than C++, but they are ultimately undone by their steep learning curves. What makes Java unique is that if you already know C++, you can start programming in Java almost immediately. In short, Java proves the same point as Gigabit Ethernet: the only way to fight network effects is to join them.
The idea of Java as a universal platform is more pernicious. The dream has been with us for decades, yet every attempt to implement it has failed. There's a reason for different systems - an engineer and a writer have very different needs. The only way for software to be truly portable is to pretend such differences don't exist. There probably are some cases where this is acceptable - Web applets that provide simple functionality, for example. But for anything significant, diverse hardware and operating systems will continue to require diverse software.
Hardware doesn't have the same immunity to progress - it's more of a science, less of an art. But gains in speed and memory are now so predictable as to be unnoticeable. And in most other respects, the last five years of computing have seen little real change.
Fortunately, this trend may not last. With communication between computers becoming as important as computation within them, they are being forced to try to keep up with bigger data streams. No, I'm not talking about NCs. The answer isn't making computers dumber; it's making their pipes wider.
Part of the solution is intelligent memory: instead of divvying a computer's brain into two parts - the microprocessor and memory - then connecting them with a thin straw, you integrate the two pieces into one. Santa Clara's NeoMagic, for example, has developed intelligent memory chips for video and graphics. But general-purpose solutions - probably aimed at ever-smaller handhelds - are still on the lab bench.
VCSELs, vertical-cavity surface-emitting lasers, are a lot closer to the mark. These tiny lasers, which have become widely available only in the past year, are incredibly exciting. They bring the advantages of photonics - high bandwidth, low power - down to board and even chip level. Everyone is racing to put them to work.
A VCSEL is a tiny laser, about 10 microns square, that can easily be stuck on a silicon chip. Where before you might have had wide-ribbon cable connecting two computer boards, now you can use an array of VCSELs that communicate via light. Because VCSELs are so tiny, you can pack them in for incredible bandwidths - hundreds of Gbytes per second per square centimeter. And doing so requires much less power than driving conventional wires.
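That bandwidth figure is easy to sanity-check. Assume a fairly conservative spacing of 250 microns between lasers in the array and a gigabit per second from each - both numbers are illustrative assumptions, not anyone's datasheet:

```python
# Back-of-the-envelope bandwidth density for a VCSEL array.
# The 250-micron pitch and 1 Gbps per laser are illustrative assumptions.

pitch_um = 250                 # spacing between lasers
per_laser_gbps = 1.0

lasers_per_cm = 10_000 // pitch_um     # 1 cm = 10,000 microns
lasers_per_cm2 = lasers_per_cm ** 2
total_gbps = lasers_per_cm2 * per_laser_gbps

print(f"{lasers_per_cm2} lasers/cm^2 -> {total_gbps:.0f} Gbps "
      f"= {total_gbps / 8:.0f} GBytes per second per square centimeter")
```

Two hundred gigabytes a second from a square centimeter of board space, on decidedly modest assumptions.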
Until such innovations are really put to work, the fundamental events of the last five years are pretty limited: networking has locked in on its key standards. The fiber-optic backbone has joined the microprocessor on a steady, predictable climb. Processing speeds will double every 18 months. Bandwidth will quadruple every two years. Corporate planners can rest easy.
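For the planners, the two treadmills even come with exchange rates: doubling every 18 months works out to about 59 percent a year, while quadrupling every two years is a flat 100 percent a year. A sketch of where that leads:

```python
# Comparing the treadmills: processing doubles every 18 months,
# bandwidth quadruples every 24 months.

cpu_annual = 2 ** (12 / 18)    # ~1.59x per year
bw_annual = 4 ** (12 / 24)     # 2.0x per year

cpu, bw = 1.0, 1.0
for year in range(1, 11):
    cpu *= cpu_annual
    bw *= bw_annual
    if year in (5, 10):
        print(f"year {year:>2}: processing x{cpu:7.1f}, bandwidth x{bw:8.1f}")
```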
Network effects are depressing - they allow bad standards to win. But it's predictable, automatized progress that scares me most.
People talk about Moore's Law as if it's an amazing, protean force, a classic example of the fierce entrepreneurial spirit that Schumpeter celebrated. In fact, Moore's Law is everything Schumpeter feared. Capitalism, he argued, faces a central paradox: It can exist only while being driven forward by the creative, nonrational innovations of entrepreneurs. But with every innovation, capitalism expands the sphere of rationality. These two trends eventually become incompatible. Once "innovation itself has been reduced to routine," Schumpeter argued, capitalism will wither. If everything is predictable, and the entrepreneur has been displaced, then some kind of centrally planned economy works just as well.
Perhaps this is going a bit far: centrally planned economies have their own problems. But lock-in has made the last five years in technology as exciting as an old Soviet five-year plan.