
Behind the Cutting Edge

Back To The Future

December 2024

Beyond the myth of ever-faster high-tech change and radical new breaks from the past

“We have reached the epoch of the nanosecond. This is the heyday of speed … a culmination of millennia of evolution in human societies, technologies, and habits of mind.” So writes the science journalist James Gleick in his bestselling book Faster: The Acceleration of Just About Everything. It’s a theme we keep hearing lately: Technological change just keeps getting faster, and it speeds up everything else in life with it.

Is this really true? Real perspective on technological change comes only from stepping back and taking a larger view—which will be a main theme of “Behind the Cutting Edge.” So let’s briefly look back a hundred years. What was the pace of change then? In 1900 electrical lighting and manufacturing had transformed both the home and the workplace within two decades, remaking a principally agricultural economy into an urban, industrial one and the household into a place where a flick of a switch turned night into day. The telegraph and then the telephone had been knitting the world into a web where information traveled instantly and required immediate response, so that the broker in Chicago had to know the day’s price of cotton or corn both in New York and in Liverpool. A transportation revolution on rails, first steam and then electric, had been followed by one on bicycles, and now an automobile industry was springing up and entering a decade in which its three competing technologies—steam, electric, and internal combustion—would fight to the death for dominance. Commuting to work and shopping in department stores and over the phone had newly become possible. Travel across the continent had been reduced from deadly months to safe and comfortable days. Indoor plumbing and hot water, which most of us can’t imagine life without, were just becoming commonplace. Life had altered unrecognizably from the agrarian existence of a century before—more, in many ways, than it has changed since.

The one way it was most unchanged was in health. Life expectancy in the United States in 1900 was still only forty-seven years; today it is around seventy-seven. Doctors were still coming to accept the germ theory of disease. Anesthesia and vaccination and disinfection had arrived, though in relatively primitive form; beyond that, most treatment of illness was little better than in the Middle Ages. The medical miracles of the century since are too numerous to mention, and the benefits to our health of modern sanitation, clean water, and hygiene are no less important.

It is fair to say that the medical and health revolution over the past century was greater than the information one. After all, the world was already drawn together by instant communication, and if information technology is still changing all of our lives, many of us have our lives only because of what has happened in health and medicine. To put it another way, one of the things we have undeniably gained from our technologies is the very commodity that Gleick sees deserting us—time.

If rapid, unsettling technological change is nothing new, there is another side of the coin too. Much of what we think of as cutting edge, as new and unprecedented, has fundamental elements that reach back surprisingly far. Grasping this fact is essential for appreciating and understanding the way the world changes—and doesn’t change—around us. Another of the aims of this column will be to explore that aspect of our new technologies: how the past defines and explains even the most novel and advanced of them.

This was the great irony of the Y2K problem. While editorial writers and commentators everywhere were remarking how at the dawn of a new century everything was new, a computer anachronism from the dark ages of the 1950s and 1960s was waiting to leap from the past and overthrow all the systems that keep tomorrow’s world together. It didn’t turn out that way, of course; after billions of dollars were spent averting disaster, we all woke up on New Year’s morning with an even greater sense of anticlimax than usual. But the persistence of that old computer code was a rebuke to the cries that, as a recent advertisement for NEC computers put it, “Change is the only constant.”

Y2K was hardly alone. Consider this: Of the three basic tools with which you interact with your computer—the keyboard, the monitor, and the mouse—two, the first two, got their basic designs most of a century before the seeds of Y2K were planted.

Your keyboard was worked out by a man named Christopher Sholes around 1870, when he was developing the first successful typewriter. He started out with an essentially alphabetical lineup of keys but then began separating the keys for letters that were often struck in succession, like s and t, because if they were adjacent, they’d be likely to jam. The result was the QWERTY keyboard, not a very rational pattern, especially since it left one of the most common letters, a, under the weakest finger on either hand—although Sholes couldn’t have known that. He didn’t foresee touch-typing.


QWERTY became standard and stubbornly remains so. A professor of education at the University of Washington named August Dvorak invented what he believed was a faster keyboard in the 1930s, but it has never gained popularity. It’s probably not even faster; the best case for its superiority was made by a series of studies conducted for the Navy in 1944 by none other than Lt. Comdr. August Dvorak, and they were full of possible biases. A meticulous 1956 study showed no strong advantage for Dvorak, certainly none that could justify the enormous costs of retraining typists and replacing keyboards everywhere.

QWERTY has obvious deficiencies for use with your computer today—for instance, you can’t type the accented letter é without memorizing a combination of keystrokes or looking up a chart of symbols on your computer. But its biggest disadvantages have all been addressed over time, and it’s hard to imagine that it won’t still exist a century from now.

Likewise the shape of your computer screen. It came out of Thomas Edison’s laboratory, though Edison would surely be as surprised as anyone to learn that. It was conceived by a young laboratory assistant named William Kennedy Laurie Dickson during the invention of motion pictures.

Dickson started out trying to record movies the way Edison captured sound, on a cylinder. This proved impossible, since the pictures would have to be microscopic, so he gradually increased the image’s size and developed movie-camera machinery to work with it. By December 1891 he was experimenting with the latest celluloid film and was settling on strips 1 3/8 inches wide and fifty feet long. He then decided on an image one inch wide and three-quarters of an inch high—the proportions of your computer screen.

Why that ratio? “He never said why in his letters,” says Paul Spehr, a motion-picture historian who is writing a biography of Dickson. “But he was first of all a photographer. He had been Edison’s photographer for seven or eight years. He had a strong sense of composition and was accustomed to photos where the bottom was always a little shorter or longer than the vertical. It was an aesthetic decision.”

It was obviously a good one, for as more and more people got into making movies, they kept adopting Dickson’s standard. When French companies started manufacturing movie film, they translated 1 3/8 inches into its metric equivalent—thirty-five millimeters. When Hollywood started making movies, it picked up the standard. Then in 1941 the National Television System Committee laid down the rules for commercial TV and chose the same shape, so that television sets could play movies and work with existing film equipment. (Movies changed to wide-screen in the 1950s only to be different from TV.) And when people started building computer monitors, they naturally used TV screens, and Dickson’s format lived on yet again.

As with your computer keyboard, your computer monitor has its limitations. Most likely your two main uses for the computer are to make documents to print out on 8½-by-11-inch paper and to see Web pages that are much longer than they are wide. You might be able to do either thing better with a vertical screen. But like your keyboard, your monitor had its major deficiencies addressed long ago, in this case with scroll bars and HOME and END keys. So there’s no clear reason why the format should change any time soon, even if it was conceived for an entirely different purpose more than a century ago. And it probably won’t.

Both your keyboard and your monitor took shape in an almost unrelated context in a vastly different time—your keyboard an attempt to work around mechanical difficulties that haven’t existed for more than a hundred years, your monitor an aesthetic choice by someone used to getting pictures with glass plates and bellows. But both Sholes and Dickson made choices that were good enough to last. And generations since have adapted to and revised those choices, altering them the very little bit they needed to—and not one bit more or one moment sooner. Which is exactly what happened with Y2K.

The epoch of the nanosecond reaches back a long, long way. And sometimes it holds on tenaciously.
