
The Ancient History Of The Internet

December 2024

Though it appears to have sprung up overnight, the inspiration of free-spirited hackers, it was in fact born of Defense Department Cold War projects of the 1950s

The Internet seems so now, so happening, so information age, that its Gen-X devotees might find the uncool circumstances of its birth hard to grasp. More than anything the computer network connecting tens of millions of users stands as a modern—albeit unintended—monument to military plans for fighting three wars. Specifically, the Net owes its existence to Allied battle strategies during World War II, to the geopolitical pressures of the Cold War, and to preparations for the postapocalypse of nuclear holocaust (the never-fought “final war” with the Soviet Union).

 

This is not a lineage the cyberenthusiasts dwell on. An effusive profile of the father of the Internet in The New York Times in September 1994 skipped entirely the circumstances of his cyberpaternity, while an extended account of the birth of the Internet a month earlier in Newsweek mentioned U.S. military sponsorship in one tangential clause but said not a word about why the Pentagon funded the project in the first place. Perhaps these strange omissions—we’re almost tempted to say Strangelovian omissions—are understandable. Internet boosters have created an instant mythology, featuring a fiercely libertarian “hackers’ ethic” and the “freewheeling, untamable soul” of Cyberspace (to quote a recent paean in Time magazine). The G.I.—government issue—stamp seems to let some of the hot air out of the hype.

An open-minded recounting of the Internet story, however, still leaves room for individual medals all around, while affirming how once upon a time government, universities, and industry worked together to produce what the late Ithiel de Sola Pool of MIT called “the largest machine that man has ever constructed.”

As with most great advances in the history of ideas, there was no one defining Internet event. No apple fell on a cyber-Isaac Newton. Nor did any visionary set out to build a new communications medium. Rather it began with a modest analytical system, devised early in World War II, that set the stage for the supportive research environment and the key technical developments that produced today’s global network.

 
The Pentagon needed to find a way to communicate after a nuclear war. The solution was a network that could bypass damage.

The analytical system, called operations research, applied scientific modeling principles to military planning. The first O.R. was done for the Allies by military scientists and civilian technologists. These boffins (as the British called them) conducted statistical studies of antisubmarine tactics that showed how the Allies could increase the U-boat kill rate by setting the charges to explode at a different depth. O.R. also devised a way to coordinate radar-operated antiaircraft batteries with the flight patterns of friendly interceptor aircraft, to avoid shooting down Allied fighter planes. Modern warfare, it became obvious, was too complex to be left to intuition; measurement and mathematical analysis were required. (Hitler, relying on a dream he had in which he learned that no German V-2 rocket would ever reach England, critically delayed the Nazi missile development program. Many Allied troops and British civilians owe their lives to his unscientific decision.)

To conduct such analyses, the military sought more powerful calculating devices. In 1944 Howard Aiken, a Harvard physics instructor, unveiled the Automatic Sequence Controlled Calculator, which he nicknamed the Harvard Mark I. Almost immediately this immense machine—more than fifty feet long, containing 750,000 parts, and weighing thirty-five tons—was put to work calculating ballistics tables for the Navy. Meanwhile, Army-funded engineers at the University of Pennsylvania worked on a machine to calculate artillery trajectories. Their handiwork, ENIAC (Electronic Numerical Integrator and Computer), represented a major development in computing technology, though not one that helped the Allied effort; it was delivered just weeks after the war’s end.

Following the victories in Europe and Japan, American military planners turned attention to their new Cold War adversaries, primarily the Soviet Union but also China (known then as Red China). The three U.S. military services contracted out O.R. work to universities and nonprofit corporations. This produced, among others, the Center for Naval Analysis, administered by the Franklin Institute, in Philadelphia; the Army-backed Operations Research Office, run by Johns Hopkins University; and, perhaps the most effective of all, the RAND Corporation, the Air Force’s principal advisory organization. Initially a technical adjunct of the Douglas Aircraft Company, of Santa Monica, RAND separated from the plane maker in 1948 and was incorporated under California law as a nonprofit company (the name is an acronym for Research and Development). Its initial budget of three million dollars came largely from the Air Force. According to Bruce Smith, a Harvard-trained political scientist who worked at RAND in the 1960s, the Air Force, the newest and least tradition-bound service, was able to “experiment more easily with novel organization forms.”

On Friday evening, October 4, 1957, RAND’s analysts, along with Pentagon officials and the American public, were jolted upright by Sputnik I. The Soviet Union followed Sputnik I with another satellite carrying the dog Laika. No matter that Laika blasted into orbit on a one-way ticket; America had expected to be first into space. The nation’s image as a technology superpower and its perceived lead in the Cold War were badly shaken. Most frightening of all, its cities suddenly seemed vulnerable to Soviet attack. One of the authors of this article was a young hotshot reporter covering the Pentagon for the International News Service during the Sputnik frenzy. He still remembers the overheated lead of his Saturday “follow story,” played prominently by newspapers around the country: “The same Soviet rocket that sent a satellite into orbit Friday can deliver an ICBM warhead on New York and Washington….”

Everything went on the table for panicky review. In hopes of producing graduates who could outthink the Soviets, high schools and colleges boosted their math and science requirements. The president of Harvard, James Bryant Conant, told parents to admonish their children, “For your own sake and for the sake of the nation, do your homework.” The “space race” (a.k.a. the “missile gap”) also affected university budgets. The Defense Department created yet another O.R. group, the Advanced Research Projects Agency, and charged it with doling out high-tech research funds.

Among ARPA’s first priorities were projects on command, control, and communication, known among war planners as C3. The Defense Department wanted to use computers not only in the Pentagon but also in the field. Bulky, balky mainframes of the era were ill suited for the battlefield, so ARPA sought a communications solution. For signals sent from a battlefield terminal to reach a headquarters-based computer, they would have to be translated from wire to radio to satellite and back. Nothing like it had ever been done before. In fact, most computer time-sharing then involved transportation rather than communication: Computer scientists keyed their jobs onto paper tapes or punch cards and then shipped them to the closest computing center.

At the same time, America’s command posts were burrowing underground in the name of C3 and “nuclear survivability.” NORAD, the air defense headquarters, carved a control center into the side of a Colorado mountain. In Washington nuclear-war plans called for evacuating the President and key officials to supersecret reinforced shelters in the Catoctin Mountains in nearby Maryland, while all 535 members of Congress were supposed to hole up in an elaborate complex under the grounds of the Greenbrier Hotel in White Sulphur Springs, West Virginia. From these subterranean hideouts, federal officials would govern the nation—that is, the parts that survived.

 
The network would enable researchers to share the few supercomputers of the era, so the government needn’t keep buying more.

The war-planning needs of the military and the research interests of computer scientists began to converge. The Pentagon asked RAND to analyze how the military could communicate (by voice telephone as well as data hookups) after a nuclear war. The existing phone network seemed far too fragile for such a task. For each call, switches in the network created a circuit between the two parties; if part of the circuit was broken, whether by an ICBM or by an errant backhoe, the connection had to be re-established from scratch.

RAND’s solution, developed by Paul Baran on an Air Force contract, was a network that could route around damage and continue to communicate. In such a system, Baran wrote, “there would be no obvious central command and control point, but all surviving points would be able to re-establish contact in the event of an attack on any one point” through a “redundancy of connectivity.” The key to creating this survivable grid was what later came to be called packet switching.

With packet switching, as Baran and others envisioned it, computers would not monopolize a circuit for the duration of their communication, as telephones do. Instead the messages would get broken up into small packets, which would flow in an intermingled stream with other packets, each of which would carry enough information to seek out its destination. Packets from a single message might take different paths to reach the destination. If one packet didn’t get through, the addressee would notify the sender to retransmit it. Then, when all the packets had arrived, the addressee would reassemble the message. The approach would be slower than having a dedicated circuit between the two points, but it would be far sturdier. If one connection broke, messages would reroute themselves. The “smarts” of the system would reside in users’ computers and in the packets themselves, not in centralized, vulnerable switching centers.
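For readers who think in code, a toy sketch can make the scheme concrete. What follows is modern Python, not anything Baran or his contemporaries wrote; the packet fields, the PAYLOAD_SIZE constant, and the simulated lossy network are all illustrative inventions:

```python
# A minimal sketch of the packet-switching idea described above:
# split a message into self-addressed packets, let them arrive out of
# order (or not at all), and reassemble at the destination.
import random

PAYLOAD_SIZE = 8  # bytes per packet; arbitrary, for illustration only


def to_packets(message: str, dest: str) -> list[dict]:
    """Break a message into packets, each carrying enough information
    (destination, sequence number, total count) to travel on its own."""
    chunks = [message[i:i + PAYLOAD_SIZE]
              for i in range(0, len(message), PAYLOAD_SIZE)]
    return [{"dest": dest, "seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]


def unreliable_network(packets: list[dict]) -> list[dict]:
    """Simulate independent routing: packets arrive intermingled and
    out of order, and any one of them may be dropped along the way."""
    delivered = [p for p in packets if random.random() > 0.1]  # ~10% loss
    random.shuffle(delivered)
    return delivered


def reassemble(sender_packets: list[dict]) -> str:
    """Receive packets, ask the sender to retransmit any that went
    missing, then rebuild the message in sequence order."""
    received = {p["seq"]: p for p in unreliable_network(sender_packets)}
    total = sender_packets[0]["total"]
    for seq in range(total):
        while seq not in received:  # addressee notifies sender: resend
            for p in unreliable_network([sender_packets[seq]]):
                received[p["seq"]] = p
    return "".join(received[seq]["data"] for seq in range(total))


message = "Lay down thy packet, now, O friend, and sleep."
assert reassemble(to_packets(message, "ucla")) == message
```

The point of the sketch is the design choice Baran emphasized: the intelligence lives in the packets and at the endpoints, so no single switching center is indispensable.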

Baran, at RAND, did the basic research on packet switching, but many of his reports were classified. Donald Davies of the National Physical Laboratory in Britain independently outlined the same general concept and contributed the word packet for the message components. Other researchers also began to focus on the idea of a packet-switching architecture.

It was an idea that appealed to ARPA, particularly its Command and Control Research Office, headed by a computer scientist named J. C. R. Licklider. ARPA in the 1960s became the patron of computer research, a Medici to the mathematical Michelangelos. The agency funded research into countless aspects of hardware and software development, including graphics, simulations, head-mounted displays, parallel processing, and networking. ARPA grants produced the most powerful computer of the mid-1960s, the University of Illinois’s ILLIAC IV, as well as nearly all artificial intelligence research in the 1960s. “Far from [our] being evil warmongers,” the computer scientist Eugene Miya has somewhat defensively said, “Some neat work was done.”

Miya and other hackers (the word then carried no negative connotations) were in deep denial, trying to insulate themselves from the currents of dissent about the Vietnam War sweeping across many campuses. Although their work was funded almost entirely by the “villainous” Pentagon (one of the most prominent figures of the 1990s digital revolution told us that 95 percent of his budget came from the military during his lab’s critical early years), the computer scientists continued to insist that ARPA funding didn’t make them part of the military-industrial complex. “I like to believe,” the computer scientist Alan Perlis later said, “that the purpose of the military is to support ARPA, and the purpose of ARPA is to support research.”

As part of its research support, ARPA agreed to fund an experimental computer network. The network, ARPA officials hoped, would demonstrate the feasibility of remote computing from the battlefield as well as test the potential of a post-World War III military communications network. In addition, the network would enable widely dispersed researchers to share the few supercomputers of the era, so that the Defense Department wouldn’t have to buy one for every contractor. In 1968 ARPA solicited bids for an expandable network linking four sites already conducting ARPA research: the University of California campuses at Los Angeles and Santa Barbara, the Stanford Research Institute (SRI), and the University of Utah.

While the bids were continuing to come in, a handful of representatives of these proposed ARPAnet nodes met to discuss what lay ahead. “We had lots of questions,” recalled Stephen D. Crocker, at the time a UCLA graduate student. People wondered how the computers would be linked and what they would be capable of doing. “No one had any answers, but the prospects seemed exciting,” he remembered. The men decided to hold more meetings. The Network Working Group, as they dubbed themselves, proved as fluid and non-hierarchical as the Internet itself would ultimately be; an early memo prefaced a list of group members by saying that “the Network Working Group seems to consist of …” “We had no official charter,” said Crocker. “Most of us were graduate students, and we expected that a professional crew would show up eventually to take over …” Of course there were no seasoned veterans; the students and professors had to be their own crew.

 
People worried that sending personal messages might somehow violate the law. Soon, however, a student hacker mentality took over.

The ARPAnet construction contract was awarded to Bolt Beranek & Newman, a Cambridge-based research firm with close ties to MIT. BBN shipped the new communications software in August 1969 to UCLA and then to SRI in October. At a November demonstration the two California machines exchanged data. The first long-distance packet-switched network was in operation. By the end of the year, all four nodes were on-line.

At this point the striking figure of Vinton Cerf, the computer scientist The New York Times called the father of the Internet, begins to take a leading role in the narrative. Born in 1943 in New Haven, Cerf turned his back on Yale to do his undergraduate work in mathematics at Stanford and to get his master’s and doctorate in computer science from UCLA. In 1969 Cerf was a graduate student working at UCLA’s Network Measurement Center, observing how the new four-node ARPAnet was functioning—and what it would take to make it malfunction. “There were many times when we would crash the network trying to stress it,” Cerf recalled.

Soon he was collaborating with Robert Kahn, an MIT math professor on leave to work at BBN. Cerf and Kahn developed a set of software “protocols” to enable different types of computers to exchange packets, despite varying packet sizes and computer clock speeds. The result, TCP/IP, was released in 1973 (by which time Cerf was teaching at Stanford). TCP—Transmission Control Protocol—converts messages into packet streams and reassembles them. IP—Internet Protocol—transports the packets across different nodes, even different types of networks. Just as TCP/IP stands for a whole “suite of protocols,” not just those two, so were there several fathers of the Internet; Cerf credits many people, “thousands by now,” for helping create the computer-network communications system we’ve come to know.
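That division of labor is still visible to any programmer today. The minimal sketch below uses Python’s standard socket module, a modern convenience layered over TCP/IP rather than anything from 1973; the loopback address, the port choice, and the tiny echo server are assumptions made purely for illustration:

```python
# A minimal modern illustration of TCP/IP in use: IP carries the
# addressed packets between machines, while TCP turns them back into
# an ordered byte stream that the application simply reads.
import socket
import threading


def echo_server(listener: socket.socket) -> None:
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)      # TCP has already reassembled the
        conn.sendall(data.upper())  # packets into an in-order stream


# AF_INET selects IP addressing; SOCK_STREAM selects TCP's reliable stream.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

with socket.create_connection(listener.getsockname()) as client:
    client.sendall(b"lay down thy packet")
    # recv may return fewer bytes in general; fine for a tiny loopback
    # message, though real code would loop until the stream is drained.
    print(client.recv(1024))        # b'LAY DOWN THY PACKET'
listener.close()
```

Because SOCK_STREAM selects TCP, the application never sees individual packets at all; the stream arrives reassembled and in order, which is exactly the service the protocol was designed to provide.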

In 1977, having left Stanford for ARPA (then called DARPA, the D for “Defense” added in 1972), Cerf worked on a different sort of interconnectivity. From a van cruising along a Bay Area freeway, a computer sent messages that traveled, by packet radio, satellite, and landlines, a total of ninety-four thousand miles. “We didn’t lose a bit!” Cerf later recalled. The project demonstrated that computers could communicate to and from the battlefield. No longer was ARPA funding pure computer science research; now DARPA insisted on what Cerf termed “militarily interesting” projects like this one. Even so, Cerf’s C3 innovation arrived as the Cold War was flagging—reminiscent of how ENIAC had been delivered at the end of World War II.

Cerf has suffered severe hearing loss since birth and has worn a hearing aid since he was fourteen. It’s serendipitous but fitting, then, that his TCP/IP made possible the text-based Net communications systems so popular today, including electronic mail, discussion lists, file indexing, and hypertext. E-mail, of course, is the most widely used of the Net services, the most convenient and the most functional.

Ray Tomlinson of BBN is credited with inventing the software and sending the first e-mail messages across ARPAnet in 1972 and 1973. At first scientists used e-mail to collaborate on research projects; their computer talk was decorous, befitting a serious O.R. project that had had its origins in Soviet-American military rivalries. There were also rules to obey. ARPA limited use of the network to official business. In addition, some users worried that sending personal messages by e-mail might somehow violate the postal laws. “You’ll be in jail in no time,” RAND’s Paul Baran warned his colleagues.

Soon, however, a graduate-student hacker attitude took over. Mailing-list software permitted large groups of people to discuss common interests, making e-mail a mass medium as well as a point-to-point one. The first list, SF-LOVERS, linked science fiction fans. “ARPA was fairly liberal … but they did occasionally put their foot down,” Bernie Cosell, an early ARPAnet user, later recalled. When ARPA brass complained, SF-LOVERS was shut down—only to rise again a few months later, after users had managed to convince ARPA that the mailing list was serving the vital purpose of testing the network’s mail capacity. Soon the network was carrying NETWORK-HACKERS, WINETASTERS, and scores of other mailing lists. ARPAnet had come a long way from C3 and survivability. The science fiction writer Bruce Sterling captured the image best: It was “as if some grim fallout shelter had burst open and a full-scale Mardi Gras parade had come out.”

As one writer put it, the “fallout shelter had burst open and a … Mardi Gras parade had come out.”
 

By the mid-1980s TCP/IP was linking ARPAnet to other networks, including the NSFnet of the National Science Foundation, another federal agency, and Usenet. The result was first called ARPAInternet and then simply the Internet. ARPAnet split in two, with military communications going onto MILNET and the computer researchers finally taking over ARPAnet in name as well as in practice. ARPAnet shut down in 1990, and NSFnet went off-line in April 1995; the most heavily traveled routes of the information superhighway now are in private hands. Nearly all the various networks used the TCP/IP language. “I take great pride in the fact that the Internet has been able to migrate itself on top of every communications capability invented in the past twenty years,” Cerf told Computerworld in 1994. “I think that’s not a bad achievement.” At a major computer convention Cerf, a natty dresser who favors three-piece suits, once disrobed to display the message on his T-shirt: IP ON EVERYTHING.

More elegantly he wrote hacker poetry. When ARPAnet was decommissioned in June 1990, scarcely anyone noticed; other elements of the Internet seamlessly took over all its functions. Cerf wrote a “Requiem for the ARPAnet.” It ends: “Now pause with me a moment, shed some/ tears./ For auld lang syne, for love, for years and years/ of faithful service, duty done, I weep./ Lay down thy packet, now, O friend, and sleep.”

