
The Federal Debt


And how it grew, and grew, and grew…

The federal government was still in the process of establishing itself in 1792 and did not have a good year financially. Total income was only $3,670,000, or 88 cents per capita. Outlays were $5,080,000. The budget deficit therefore amounted to fully 38 percent of revenues. The next year, however, the government sharply reduced expenses while enjoying increased tax receipts and showed its first budget surplus. Except during periods of grave economic or military crisis, the government would never again run up so large an annual deficit in terms of a percentage of total revenues.

Not, that is, until the peaceful and relatively prosperous year of 1992. That year the federal government had revenues of $1.076 trillion and outlays of $1.475 trillion, a budget deficit equaling 37 percent of revenues. Everyone, conservative and liberal alike, agrees that something is terribly wrong with how the United States government conducts its fiscal affairs today. The last eighteen years of the nation’s history have been marked by a more than 25 percent increase in federal revenues (in constant dollars) and the collapse of our only significant external military threat. Yet in those years the United States has spent as much of tomorrow’s money as we would have spent fighting a major war or new Great Depression. That will have no small consequences if tomorrow we actually have to fight one.
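The yardstick in both cases is the same: the deficit expressed as a share of that year’s revenues. A minimal sketch of the arithmetic, using only the figures quoted above (rounding accounts for the stated 38 and 37 percent):

```python
# Deficit as a percentage of revenues, the measure used to compare 1792 and 1992.
def deficit_share(revenues, outlays):
    return (outlays - revenues) / revenues * 100

print(f"1792: {deficit_share(3_670_000, 5_080_000):.0f}% of revenues")   # ~38%
print(f"1992: {deficit_share(1.076e12, 1.475e12):.0f}% of revenues")     # ~37%
```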

How did the world’s oldest continuously constituted republic lose control of so fundamental a responsibility as its own budget? The answer is, as with most governmental policy disasters in a democracy, one innocuous step at a time. While politicians, economists, and many others pursued their self-interests, the national interest largely got lost in the shuffle.

Over the last sixty years five trends have increasingly affected government fiscal policy. First, a powerful but fundamentally flawed concept in the discipline of economics has completely changed the way both economists and politicians view the national economy and their responsibilities toward it. Second, the responsibilities of government in general and the federal government in particular, as viewed by the public, have greatly increased. Third, a shift in power from the Executive to Congress has balkanized the budget process by sharply limiting the influence of the one politician in Washington whose constituency is national in scope, the President. Fourth, the decay of party discipline and the seniority system within Congress itself has further balkanized the budget process, dividing it among innumerable committees and subcommittees. This has made logrolling (you vote for my program and I’ll vote for yours) the order of the day on Capitol Hill. Finally, the political-action-committee system of financing congressional elections has given greatly increased influence to spending constituencies (often called special interests, especially when they are funding someone else’s campaign) while sharply reducing that of the electorate as a whole, which picks up the tab.


The result is a budget system that has become ever more heavily biased toward spending. As a consequence, the national debt has been spiraling upward, first only in absolute numbers and then, in the last twelve years, as a percentage of the gross national product as well. Today it stands at about 68 percent of the annual GNP, higher than it has ever been in peacetime except in the immediate aftermath of a great war.

To be sure, a country as rich and productive as the United States can well afford to service its present debt. But the current trend is ominous, to put it mildly. Just consider: In the first 204 years of our independence, we took on the burden of a trillion dollars of debt, mostly to fight the wars that made and preserved us a nation. In the last fifteen, however, we have taken on four trillion more for no better reason, when it comes right down to it, than to spare a few hundred people in Washington the political inconvenience of having to say no to one constituent or another.


Adam Smith’s Good Housekeeping

The new United States emerged from the Revolution sovereign but in a state of fiscal chaos. The Continental Congress had been forced to resort to printing fiat money, the so-called continentals that sank quickly into worthlessness. The various states had borrowed heavily to meet the demands of the war.

The central government under the Articles of Confederation was financed solely by contributions from the various state governments (just as the United Nations is funded today) and had no power to tax or borrow on its own authority. Because the state governments had pressing needs of their own (as governments always do), their contributions were often late and sometimes nonexistent. As a result, the central government had grave difficulties meeting even its current obligations.

It was this financial crisis that helped force the drafting of the new Constitution in 1787. The document that the Founding Fathers created that summer in Philadelphia—the desperate poverty of the old government all too fresh in their minds—put remarkably few restrictions on the new government’s power to spend, tax, and borrow.

The federal government is required to maintain such things as the post office and the census, which necessarily require spending, and Congress may not make military appropriations extending more than two years. But it is empowered to appropriate money for the “general welfare,” a term left undefined. In the twentieth century it has come to be construed so broadly as to encompass a museum dedicated to the memory of Lawrence Welk.

Taxes merely had to be uniform throughout the United States and could not be laid on the exports of any state. The power to borrow, meanwhile, was entirely unlimited, one of the very few powers granted by the Constitution that had no checks or balances.

What did limit the fiscal powers of the new government was the universal consensus, among ordinary citizens and the political elite alike, about the proper and prudent way for a government to act when it came to taxing, spending, and borrowing. This consensus was best summed up, as you might expect, by Adam Smith in The Wealth of Nations. “What is prudence in the conduct of every private family,” he writes, “can scarce be folly in that of a great kingdom.” In other words, governments should finance current expenditures out of current income, save for a rainy day (or, more properly, allow the people to do so by lowering taxes when the budget is in surplus), borrow only when inescapably necessary, and pay back borrowed money as quickly as possible.

Alexander Hamilton, appointed by President Washington to be the first Secretary of the Treasury, moved swiftly to put the new government’s fiscal house in order. Taxes were laid. Mostly excise taxes on products like whiskey and duties on imports, these were intended both to fund the new government and to provide a revenue stream to service and reduce the new national debt. This debt in turn funded the redemption of the old Revolutionary War debt on a sound basis.

At the beginning the national debt amounted to $80 million, something on the order of 40 percent of the gross national product of the day. But as the government found its fiscal feet after 1795, it ran a deficit only twice until the War of 1812. As the country’s economy rapidly expanded, the debt declined in both relative and absolute terms. By 1811 the total debt was only a little more than half what it had been in 1795.

The war, of course, sharply reversed matters. Federal government outlays in 1811 were a little more than $8 million. By 1814 they were more than $34 million. Meanwhile revenues suffered as the ever-tightening British blockade cut sharply into import duties, the main source of government income at the time. In 1814 outlays exceeded revenues by 211 percent.

Hamilton had intended that the Bank of the United States, which he established, should finance deficits incurred by the government, but its charter had expired in 1811, the victim of politics. The government now found itself hard pressed to raise loans to finance the war, because the country’s financial markets were still in their infancy and unable to handle the large sums required.

The country’s affluent were approached directly, and many responded. John Jacob Astor, already America’s richest citizen, subscribed to $2 million worth of government paper. (He drove a very hard bargain, buying the bonds only at a steep discount from their face value. The government, of course, had little choice but to go along with the demands of someone who could single-handedly fund 2 percent of the entire national debt.)

Jackson Does Away With the Debt Once and for Once

By 1815 the debt stood at $127,335,000, a level it would not see again until the Armageddon of the Civil War. For when peace was reestablished, the government once more determinedly whittled away at the debt. By 1829 it had been reduced to less than $50 million.

When Andrew Jackson entered the White House that year, he decided as a matter of deliberate policy to rid the federal government of debt entirely. By the end of 1834 he was able to announce that he had succeeded. The last of the debt would be discharged, Jackson wrote to Congress in the State of the Union message that year, and the Treasury would have a positive balance of $440,000 on January 1, 1835.

Jackson left no doubt just how important he thought discharging the debt was, equating it with peace itself. “Free from public debt,” the President wrote, “at peace with all the world … the present may be hailed as the epoch in our history the most favorable for the settlement of those principles in our domestic policy which shall be best calculated to give stability to our Republic and secure the blessings of freedom to our citizens.” Praise for Jackson’s action on the debt was universal. Roger B. Taney, the Chief Justice, wrote the President that the extinction of the debt was, as far as he knew, unique in the history of nations. Indeed, Jackson’s achievement remains singular today.

The Democratic party, for its part, decided to take advantage of the fact that January 1835 was also the twentieth anniversary of Jackson’s sweeping military victory over the British at the Battle of New Orleans. It held a banquet to celebrate the two triumphs, although Jackson modestly refused to attend, sending the Vice President in his stead. “New Orleans and the National Debt,” the Washington Globe wrote, “—the first of which paid off our scores to our enemies, whilst the latter paid off the last cent to our friends.”

But Jackson’s hope of a debt-free federal government lived only briefly. Shortly after he left office, the country plunged into depression, one of the most severe of the nineteenth century. Revenues, which had reached $50,827,000 in 1836, shrank to $24,954,000 the following year. Until the depression lifted in 1843, the government would have only one year of surplus as the debt climbed back up to $32 million. The Mexican War then caused a further rise, to $68 million.

It was in 1847, during the Mexican War, that Congress for the first time altered the practice of appropriating specific amounts of money for each expenditure it authorized. Instead it empowered the Treasury to pay all interest and principal on the national debt as it came due, regardless of the amount paid out. This raised little comment at the time. After all, there was no choice about paying the money if the ability to borrow at reasonable rates was to be maintained, and it saved Congress the trouble of passing a specific bill every year.

For many years this was the only federal spending that was put on what a later age might call automatic pilot. In the twentieth century, however, Congress would resort more and more to this so-called backdoor spending, until it became one of the prime reasons the budget went out of control.

After the Mexican War the peace and prosperity of the early 1850s allowed the debt to be cut in half. Then a new depression struck in 1857, and the debt moved back up until, at the end of 1860, it amounted to $64,844,000. Only one year later it reached $524,178,000 and was rising at a rate of well over a million dollars a day.

Modern Funding for Modern Wars

The Civil War was by far the largest war fought in the Western world between the end of the Napoleonic era and World War I, and its cost was wholly without precedent. To pay for it, the federal government moved to tax nearly everything. Annual revenues, which had never exceeded $74 million before the war, were $558 million by 1866 and would never again drop below $250 million.

But revenues did not come anywhere near to matching outlays, especially in the early years of the war. In fact, 1862 would be the worst year ever—so far—for spending in excess of income: The deficit amounted to an awesome 813 percent of revenues (almost four times the worst year of World War II). Radical new methods were needed to meet the emergency.

It was a Philadelphia banker, Jay Cooke, who invented the means by which modern wars have been financed ever since: the national bond drive. Offering bonds in amounts as small as fifty dollars, Cooke peddled them retail to hundreds of thousands of ordinary citizens, most of whom did not even have bank accounts. So successful was he that by the end of the war he was actually raising money faster than the War Department could spend it.

By 1866 the national debt stood at a then-staggering $2,755,764,000, no less than forty-two times what it had been only six years earlier. Once again, however, the emergency over, the government began doggedly to pare it down, running a surplus of 7 percent in 1866, and would not have a deficit year—through good times and bad—until the severe depression of the 1890s produced one twenty-eight years later. By then the national debt had been reduced by nearly two-thirds in absolute dollars. As a percentage of the rapidly expanding GNP, it declined at an even faster rate, to well under 10 percent.

The same pattern repeated in World War I and its aftermath. The debt rose by a factor of twenty during the war and was reduced by more than a third in the 1920s. But again government revenues and outlays moved to a new, permanently higher plane, as they have after every great war in our history. With the exception of 1865 the government never spent even close to a billion dollars in one year until 1917. Since that year it has never spent less than $2.9 billion.

Still, the old consensus held. Andrew Mellon, Secretary of the Treasury for most of the 1920s, explained that “since the war two guiding principles have dominated the financial policy of the Government. One is the balancing of the budget, and the other is the payment of the public debt. Both are in line with the fundamental policy of the government since its beginning.”


But it was not to hold much longer. In the 139 years encompassing the period from 1792 to 1930, the federal government ran a surplus ninety-three times and a deficit forty-six times. Beginning with 1931, however, we have had a surplus in only eight years and a deficit for the rest (except in 1952, when spending exactly matched revenues).

FDR Changes the Rules

What happened? One answer, of course, was the Great Depression. World trade collapsed, corporate profits vanished, the incomes of the rich—the only people to feel the personal income tax in those days—steeply declined, and government revenues plunged. More than $4 billion in 1930, they were less than $2 billion in both 1932 and 1933. Meanwhile, government outlays rose sharply as cries for federal relief funds became undeniable: $3.3 billion in 1930 and $4.7 billion in 1932, when the deficit amounted to 142 percent of revenues, by far the worst peacetime deficit in the nation’s history.

Herbert Hoover, true to the old wisdom, tried desperately to do something about the mounting deficits. In 1932, in the teeth of both a re-election campaign and a still-collapsing economy, he pushed a large tax increase through Congress to help balance the books, an act that not only deepened the depression further but ensured his overwhelming defeat in November.

Franklin Roosevelt largely based his presidential campaign that year on lambasting Hoover’s fiscal mismanagement. “Let us have the courage to stop borrowing to meet continuing deficits….” Roosevelt said in a radio address in July. “Revenues must cover expenditures by one means or another. Any government, like any family, can, for a year, spend a little more than it earns. But you know and I know that a continuation of that habit means the poorhouse.”

No sooner was he in office himself, however, than Roosevelt made an unbalanced budget a matter of deliberate policy for the first time in the history of the Republic. His advisers quickly convinced him that “passive deficits,” not profligate spending, were, under the circumstances, good policy. They should be tolerated, the advisers thought, because any attempt to balance the budget would only make matters worse, as Hoover’s taxes had, and might even threaten domestic stability.

In any event, despite his campaign rhetoric, Roosevelt—who possessed in spades the gut political instincts that Hoover completely lacked—was not about to continue the policies that had destroyed Hoover’s Presidency. He had no trouble discerning that new spending programs were politically popular while new taxes most emphatically were not. Thus the extraordinary conditions of the 1930s allowed Roosevelt to institute an array of new federal programs, doubling federal spending between 1933 and 1940, while raising taxes only on what was left of the very rich, who saw their income tax rates increase sharply.

These spending programs proved enduringly popular even as better times began to return. So the increased tax revenues the improved economy brought were applied largely to extending the new social safety net, not to balancing the budget, still less to reducing the debt. Furthermore, the percentage of the gross national product that passed through Washington began to climb sharply. Federal outlays amounted to 3.7 percent of GNP in 1930. By 1940 they were 9.1 percent. It is perhaps not going too far to say that Roosevelt changed the country’s perception of the proper scope of the federal government’s responsibilities as much as the Civil War had changed the country’s perception of itself.

It was World War II, however, not New Deal programs, that finally ended the Depression. And needless to say, the war only increased the deficits. By 1946 the United States had run sixteen straight deficits. The national debt now stood at $271 billion, a hundred times what it had been at the end of the Civil War and almost seventeen times what it had been in 1930.

But now, for the first time after a great war, debt reduction was not the first object of federal budgetary policy. The most influential economist since Adam Smith, Britain’s John Maynard Keynes (Lord Keynes after 1942), had appeared on the scene. American government fiscal policy would never be the same again.

In the Long Run We Are All Dead

Before Keynes, economists had been largely concerned with what is now called microeconomics, the myriad individual allocations of resources that determine prices and affect markets. In effect, economics had been concerned with the trees. Keynes, however, looked at the forest, the macroeconomic phenomena of aggregate demand and supply.

Keynes argued, in one of his most famous aphorisms, that while these must balance out in the long run, it was equally true that in the long run we are all dead. In the short run, aggregate supply and demand often do not balance, with pernicious results. If demand outstrips supply, inflation occurs. If total demand is insufficient, depression results.

Keynes further argued that government could and should take an active role in affecting both aggregate demand and supply. When inflation threatens, Keynes thought, government can dampen demand by reducing the money supply, raising taxes, reducing government spending, or some combination of the three. Opposite government action could deal with an economic slowdown. The result, thought Keynes, would be a smoothly functioning economic system, permanently high employment, and low inflation.

Equally important, Keynes stood Adam Smith on his head with regard to debt. He argued that families and nations are different economic beasts altogether and that prudence for one could indeed be folly for the other. A family, Keynes said, must necessarily borrow from someone else, but a nation can borrow from itself, the debits and credits canceling each other out, at least macroeconomically. The national debt—that often necessary but always undesired evil of classical economics—therefore didn’t really matter.

There is no doubt that Keynes’s theory is a mighty work of a mighty intellect. Keynes published his seminal book, The General Theory of Employment, Interest, and Money, in 1936, and it had an immense impact throughout the intellectual world. It is not hard to see why. Like Adam Smith and unlike all too many other economists, Keynes commanded the English language. Moreover, his theory appeared to solve many puzzles regarding how the Great Depression had come about and why it lingered so long. But as a prescription for handling the economy in the future, it has proved to have at least three fatal flaws.

The first is that Keynes still viewed the economic universe as economists had always viewed it, as a machine. Economics became a discipline in the eighteenth century, when Sir Isaac Newton’s intellectual influence was overwhelming. As a result, economists from Adam Smith on have looked to the Newtonian clockwork universe, humming along in response to immutable laws, as their model for the economic universe.

At the end of the nineteenth century, an Englishman named Alfred Marshall, trained as a mathematician and physicist, created what Keynes—Marshall’s pupil at Cambridge—approvingly called “a whole Copernican system, by which all the elements of the economic universe are kept in their places by mutual counterpoise and interaction.” Marshall’s conception was self-regulating and inherently stable. Keynes substituted one that required an engineer—government—for maximum efficiency. Keynes’s model has dominated economic thinking ever since, despite the fact that, even enormously expanded and refined, it has proved inadequate at best and often quite useless in predicting events in the real world.

The reason is simple enough. The unspoken assumption of the economy-as-machine paradigm is that a given action with regard to taxes, spending, or monetary policy will have a given result, just as putting more pressure on the gas pedal always makes a car move faster. Unfortunately the basic parts of an economy are not bits of metal obeying the laws of physics but human beings, often unpredictable and always self-interested.

So the cogs in the American economy—you, me, and 250 million other human beings—are capable of interacting in ways unexpected by economists using mechanical models. That’s why a 1990 tax on luxury boats and airplanes, which was supposed to raise $16 million, raised $58,000 instead. People simply stopped buying boats and airplanes. Rather than raise revenue, the new tax caused ten thousand layoffs. To use the car analogy again, this time stepping on the gas pedal didn’t make the car speed up; it made the oil pressure drop.

The second flaw in the Keynesian system is that timely and reliable information on the state of the economy is essential if politicians are to make correct policy decisions. But even in a world filled with number-crunching computers this is not to be had. Final figures of even so basic a statistic as GNP come out three years after the period they measure. Preliminary figures, to be sure, are available in a few weeks, but they are highly unreliable and subject to gross revision. It’s a bit like driving a car whose dashboard instruments tell you only what the situation was an hour earlier.

The third flaw in Keynes’s theory lies in human nature itself, a powerful force in the real world that Keynes totally ignored. For the Keynesian system to function, it must be applied dispassionately. Taxes must be cut and spending increased in bad times. In good times, however, taxes must be increased and spending cut. That, in a democracy, has proved to be politically impossible.

One problem, of course, is that depression is always recognized in real time, but prosperity, like happiness, is most easily seen in retrospect. The 1980s, for instance, are increasingly becoming remembered as a time of plenty in this country, a decade when the GNP rose by 35 percent in real terms. But the newspapers of the day were filled with stories about farmers losing their land, the big three auto companies being taken to the cleaners by the Japanese, and the first stock market crash in nearly sixty years.

In an economy as vast as that of the United States, recession is always going to be stalking one region of the country or one sector of the economy, even while the overall trend is upward. Living day to day, ordinary citizens, politicians, and economic reporters alike have a natural tendency to concentrate on the trees that have problems, not the forest that is thriving.

The flaws, of course, were not apparent in the beginning; what was apparent was the theory’s promise of a world without depression. Economists took to it immediately. Within a decade it was the overwhelmingly dominant school of economic thought.

But there was a second reason that Keynes so quickly swept the field among economists. It might be called the Madison Effect, in honor of James Madison’s famous dictum that “men love power.” After all, until Keynes, politicians had not needed economists. But Keynes made them indispensable, and economists knew it.


“We Are All Keynesians Now”

Politicians took a little longer to come around. Those of Truman’s and Eisenhower’s generation, born in the last decades of the nineteenth century, had been raised in the classical tradition, and many had actually read Smith, Ricardo, and John Stuart Mill in their youth. By the 1930s these men were middle-aged and relatively unreceptive to new ideas, especially fundamental ones.

Also, the Keynesians’ predictions regarding the postwar American economy proved very wide of the mark, foreseeing unemployment when inflation turned out to be the major problem. And even Keynesians, using any number of different variations on the Keynesian economic model, gave contradictory advice. Harry Truman joked that what he needed was a one-armed economist, because the ones he had were always saying “on the one hand … but on the other hand.”

The politicians, however, were also fully aware of the sharply different political fates of Hoover and Roosevelt. Passive deficits therefore were no longer questioned in times of recession, nor are they likely to be again.

And fully Keynesian notions began to creep in. In 1946 Congress passed the first Full Employment Act, committing government to actively seeking high employment in the national economy, something that would have been unthinkable twenty years earlier. That same year the President’s Council of Economic Advisers was created, within the White House itself, to offer the President options for handling the economy as a whole.

Still, between 1946 and 1960 there were seven years of deficit and seven of surplus, all but two of the deficits small ones. The fact that two of those years of surplus were during the Korean War demonstrates clearly that the idea of pay-as-you-go still had powerful political appeal. And the national debt, while it did not shrink in nominal dollars (in fact it rose from $271 billion to $290 billion), did shrink by nearly a third when measured in constant dollars. And the economy in these years grew swiftly. So the national debt, which had been nearly 130 percent of GNP in 1946, was less than 58 percent of GNP by 1960.

But if Keynesianism was largely an alien or at least uncongenial concept to those who served under Truman and Eisenhower, it had a powerful appeal for the new generation of politicians who came to power with John F. Kennedy in 1961. They had been educated during the Great Depression and its aftermath. Many had been taught to think economically in Keynesian terms (the first edition of Paul Samuelson’s thoroughly Keynesian introductory college textbook, which has sold in the millions, came out in 1948).

And again the Madison Effect exerted a powerful tug. Until Keynes the business cycle had been regarded as a force of nature, no more to be influenced than the tides and thus not within a politician’s purview. Now, however, there was an elegant theory that not only justified political manipulation of the economy as a whole but virtually commanded it. By enlarging the scope of legitimate political action, Keynesianism enlarged the power of politicians. By the end of the 1960s, even so basically conservative a politician as Richard Nixon was able to say, without fear of contradiction, “We are all Keynesians now.”

Moreover, politicians have a natural inclination to spend in general, even if they might disagree fiercely about what, specifically, to spend on. After all, it earns them the gratitude, and likely the votes, of the beneficiaries. Equally, they hate to tax and perhaps lose the votes of those who have to write bigger checks to the government. Under the old consensus, pleasing both halves of the body politic had been largely impossible, and politicians spent much of their time choosing between them and hoping they guessed right.

Keynesianism gave them an intellectual justification for pursuing their self-interest in both high spending and low taxes. It is little wonder that they did so. Constantly enlarging government spending to meet one more perceived need, they avoided higher taxes either by paying with the increased tax revenues of an expanding economy or by actually increasing the debt, despite the prosperous times.

Since John F. Kennedy was inaugurated as President, the U.S. government has run a budget surplus exactly once. During the first decade of total Keynesianism, the national debt increased by nearly a third (although it stayed nearly flat in constant dollars, thanks to the increasing inflation that marked the latter years of the decade). That was a greater increase than in any previous decade that did not involve a great war or depression. But because the sixties were also a decade of strong economic growth, the debt as a percentage of GNP continued to decline, although at a much slower pace than in the late forties and fifties. By 1970 the national debt was only 39.16 percent of GNP, lower than it had been since 1932.

Keynesians, of course, took credit for the strong economic growth in that decade and pointed to the falling ratio of debt to GNP as proof that debt didn’t matter to a sovereign power. Indeed, they talked about being able to “fine-tune” the American economy, mechanics tweaking it here and changing the air filter there to keep it running at peak efficiency.

In fact the Keynesian economic model, or more precisely all the Keynesian economic models, for they were many, were about to run off the road altogether in the high-inflation, high-unemployment economy of the 1970s. It was an economy that Keynesians thought to be impossible in the first place. Meanwhile, political events and new political conditions were beginning to interact in Washington, and the budget of the U.S. government, the largest fiscal entity on earth, was about to spin out of control.

“The Most Inefficient and Expensive Barnacle”

The Founding Fathers deliberately established an eternal power struggle between the President and Congress. They gave to Congress those decisions, such as how much spending to allow, that reflect the diverse interests of the people. Equally they gave the President the powers that are best exercised by a single individual, such as command of the military.

Over the years since Washington took office, power has flowed back and forth between the White House and Capitol Hill several times. In great crises, when a strong hand at the tiller was obviously needed, Presidents like Abraham Lincoln and Franklin Roosevelt were able to get pretty much what they wanted from Congress. So too could Presidents of extraordinary personality or political skills, such as Theodore Roosevelt and Lyndon Johnson. But when times were good or the White House was occupied by a weak President, like Ulysses S. Grant, Congress tended to encroach steadily on the President’s freedom of action.

Nowhere have the power shifts between President and Congress been more noticeable in the twentieth century than in regard to spending. It was only in the aftermath of World War I that the federal government began for the first time to develop an actual budget to facilitate looking at the whole picture, not just the sum of all congressional appropriations. Until 1921 each executive department simply forwarded its spending requests to the Secretary of the Treasury, who passed them on in turn to the appropriate committee in the House. (The Constitution mandates that all revenue bills must originate in the House. By convention, spending bills originate there as well, giving the House the dominant congressional say in fiscal affairs.)

After the Civil War both houses of Congress had established appropriations committees to handle spending bills. Members who were not on these committees, however, envied the power of those who could dispense money—then as now the mother’s milk of politics—to favored groups. By the mid-1880s eight of the fourteen appropriations bills had been shifted to other committees. A former chairman of the House Appropriations Committee, Samuel Randall, predicted disaster. “If you undertake to divide all these appropriations and have many committees where there should be but one,” he wrote in 1884, “you will enter upon a path of extravagance you cannot foresee … until we find the Treasury of the country bankrupt.”

Time would prove Randall right, in fact more than once. By 1918 some departments had appropriations that were decided on by two or more committees, often working at cross-purposes. Many in Congress were disgusted with how such important matters were handled. “The President is asking our business men to economize and become more efficient,” Rep. Alvan T. Fuller declared in 1918, “while we continue to be the most inefficient and expensive barnacle that ever attached itself to the ship of state.”

In 1920 the House, by a bare majority, restored exclusive authority on spending bills to its Appropriations Committee, and the Senate followed suit two years later. But the House Appropriations Committee was considerably enlarged and split into numerous subcommittees that dealt with the separate spending bills. The committee as a whole usually had no practical choice but to go along with the subcommittees’ decisions. Power over individual appropriations therefore remained widely dispersed, while the ability to control and even determine total spending remained weak.

Meanwhile, in 1921 Congress passed the Budget and Accounting Act. This established the Bureau of the Budget, an arm of the Treasury Department, and the General Accounting Office, an arm of Congress empowered to audit the various executive departments and to make recommendations for doing things cheaper and better.

The executive departments now had to submit their spending requests to the Bureau of the Budget, which put together revenue estimates and a comprehensive federal spending plan before the requests were transmitted to Congress. By establishing the Bureau of the Budget, Congress gave the President dominating influence over overall spending. Because Congress lacked the bureaucratic machinery, it had no choice but to accept the President’s revenue estimates and could do little more than tinker with his spending proposals.

In 1939 Roosevelt, to tighten his grip on the budget even further, moved the bureau into the White House itself, where it would be under his immediate thumb. (In 1970 it became the Office of Management and Budget.) In 1946 Congress, wanting to increase its own power over the overall budget, passed the Legislative Reorganization Act. This required Congress to decide on a maximum amount to be appropriated each year before the actual appropriations bills were taken up. It was a dismal failure. In 1947 the Senate and House failed to agree on a spending limit. In 1948 Congress simply disregarded the limit and appropriated $6 billion more than the spending resolution had called for. In 1949 it failed to produce a resolution at all.

And Congress has often acted in ways that actually reduced its power to affect the budget as a whole, by increasing the amount of so-called backdoor spending. The members of the legislative committees still resented the power of the Appropriations Committee and its subcommittees, and in the late 1940s they began to redress the balance by writing spending into permanent law. Thus any changes in spending levels in the programs affected would have to pass through the committees that originated the laws in the first place.


They did this by increasingly authorizing government agencies to borrow on their own, to enter into contracts, and to guarantee loans that then become obligations of the United States. Some quasi-governmental agencies such as the Postal Service were taken “off budget” and thus effectively removed from direct political control. But the most worrisome of this backdoor spending has been the “entitlements”—moneys paid without limit to all who qualify under such programs as Social Security, food stamps, and Medicare. Today backdoor spending constitutes fully three-quarters of the entire budget yet is subject to no direct congressional control whatever.

Congress’s failure to set total spending limits in the 1940s left the President still largely in charge of the budget for the next two decades, thanks to his ability to forecast revenues and shape the overall budget and, increasingly toward the end of the period, his power of “impoundment.” The Constitution is completely silent on whether the President is required to spend all the money that Congress appropriates. Certainly George Washington didn’t think so; he was the first to impound a congressional appropriation by simply refusing to spend it. Most Presidents, up to Richard Nixon, did likewise.

In 1950 Congress even indirectly acknowledged a limited impoundment power, by authorizing the President to take advantage of savings that were made possible by developments that occurred after an appropriation was made. But as the pressure on Congress to spend increased, and the old pay-as-you-go consensus began to fail, Presidents were forced to use the impoundment power more often and more aggressively in order to keep total spending in check.

In 1966 Lyndon Johnson used impoundment to cut a huge $5.3 billion chunk out of a $134 billion budget. His aim was to damp down the inflation that was largely caused by his guns-and-butter policy of fighting the Vietnam War at the same time he was increasing social spending at home. The impounded money included $1.1 billion in highway funds and $760 million in such popular areas as agriculture, housing, and education. The Democratic-controlled Congress, needless to say, was not happy about this. But since Johnson was both a Democratic President and perhaps the greatest political arm twister in the country’s history, he was able to get his way. In the following two years he impounded even larger sums.

His successor, Richard Nixon, did not fare so well. Nixon was, as he said, a Keynesian. But as a Keynesian he knew that in times of high inflation and low unemployment, such as he faced when he entered office, it was time to tighten, not increase, federal spending. Mostly by coincidence, in 1969, Nixon’s first year, the budget that was largely the work of the outgoing Johnson administration produced the last surplus the country has known.

Thereafter, congressional appropriations, despite the good times, continued to rise, and Nixon impounded more and more money. During the election of 1972 he called for a $250 billion spending ceiling for the next fiscal year, but the Senate rejected the request in October. Winning forty-nine states the following month, the reelected President decided to keep federal spending under that limit anyway, using the explicit power of the veto and the implicit one of impoundment.

Congress reacted angrily. Rep. Joe Evins, who was chairman of the Appropriations Subcommittee on Public Works—the very ladle of the political pork barrel—claimed that Nixon had impounded no less than $12 billion in appropriated funds. The Nixon administration responded that it was impounding only $8.7 billion, the smallest amount since 1966.

The Senate convened hearings on impoundment, chaired by Sen. Sam Ervin of North Carolina, the Senate’s leading authority on the Constitution, who thought that impoundment was flatly unconstitutional, being in effect a line-item veto. Both the House and the Senate produced bills that would have severely restricted or even eliminated the President’s impoundment authority. But no impoundment bill cleared Congress that session, and Washington was soon consumed with the Watergate scandal. As Nixon’s political leverage began to erode, Congress set out to supersede the Presidency in the budgetary process. The result was the wildly misnamed Budget Control Act of 1974. Nixon signed it on July 12, not because he thought it was a good idea but because he knew any veto was futile. Less than a month later he resigned, leaving the Presidency weaker than it had been in the forty years since Franklin Roosevelt had been inaugurated.

The Madison Effect

The new Budget Control Act created the Congressional Budget Office to give Congress much the same expertise as the President enjoyed from the Office of Management and Budget (while, of course, duplicating most of its work). Further, it forbade impoundment, substituting two new mechanisms, rescission and deferral. The first allowed the President to request that Congress remove spending items from appropriations. But unless both houses agreed, the money had to be spent. Needless to say, rescission has proved useless as a means of budgetary discipline. The second mechanism, deferral, was ruled unconstitutional.

But with the Presidency already severely weakened by the folly of its most recent occupant, Congress, in writing the Budget Control Act, was much more concerned about the distribution of power within Congress itself. The original proposal of the joint committee that had been established to review budget procedures called for ceilings to be established early in the year. These, of course, would have restricted the ability of Congress to begin new programs or enlarge old ones without taking the money from somewhere else, so flexible “targets” were substituted for rigid ceilings.

The result was that there was now little to offset Congress’s natural inclination to spend, either in Congress or in the Presidency. Further, this inclination had been, if anything, only increased by a revolution in the House of Representatives that resulted in the overthrow of the seniority system.

Under the seniority system the senior member of a committee in the majority party was automatically chair of that committee. This setup had been arranged in the early days of the century as a check on the then unbridled power of the Speaker. But the many freshman representatives who entered the House in 1975, in the wake of the Watergate scandal, were typically young, liberal, and not eager to wait years before achieving real power in the House.

They forced a change in the rules so that the majority-party caucus (all members of that party meeting together) elected the committee chairs at the beginning of each new Congress. In practice this meant the Democratic caucus until this year, since the Democratic party had had a majority in the House from 1954.

In theory this made the House much more democratic. In fact it removed nearly the last check on spending. Under the seniority system the committee chairs, safe both in their seats and in their chairmanships, could look at the larger picture—the national interest—as well as their own political interests. Under the new system, however, they had to secure the support of a majority of the caucus every two years to keep their chairmanships. That, of course, meant they had to make promises—and promises, in Congress, almost invariably mean spending. Further, the spread of television as the dominant medium for political campaigns, and the political-action-committee system for funding those campaigns, made the members of Congress increasingly independent of their home base and grass-roots support and ever more dependent on the spending constituencies that ran the PACs.

The result was an explosion of deficit spending, because there was no one in Washington with the power or the inclination to stop it. In nominal terms, the national debt more than doubled in the 1970s, from $382 billion to $908 billion. In constant dollars, despite the galloping inflation of that decade, it rose more than 12 percent. And while as a percentage of GNP it had been falling every year since the end of World War II, in the 1970s it stayed nearly constant by that measure.
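Nominal dollars, constant dollars, and the debt-to-GNP ratio can tell quite different stories about the same decade. A rough sketch of the 1970s figures above; the price-level factor of about 2.1 is an assumed, illustrative value, not an official index:

```python
# Two measures of the 1970s debt growth described above.
debt_1970, debt_1980 = 382e9, 908e9   # nominal figures from the text
price_factor = 2.1                    # assumed rise in the price level, 1970 -> 1980

nominal_growth = debt_1980 / debt_1970 - 1                # ~1.38: "more than doubled"
real_growth = debt_1980 / price_factor / debt_1970 - 1    # ~0.13: "more than 12 percent"

print(f"nominal: +{nominal_growth:.0%}, constant dollars: +{real_growth:.0%}")
```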

The only thing that kept federal deficits from getting a great deal worse than they did was the very high inflation the nation experienced in the late 1970s. Inflation caused nominal wages to rise sharply even as real wages declined, and the ever-higher nominal wages pushed people into higher and higher tax brackets. It was, it would seem, a politician’s dream come true: a mammoth and continuing tax increase on real wages that never had to be voted on.
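Bracket creep is easy to see in a toy example; the two brackets and the 10 percent inflation rate below are hypothetical, chosen purely for illustration:

```python
# Hypothetical two-bracket income tax: 20% up to $20,000, 40% above.
# A raise that merely keeps pace with 10% inflation leaves real pay unchanged,
# yet pushes more of it into the top bracket, so the real tax bill rises.
def tax(income):
    return 0.20 * min(income, 20_000) + 0.40 * max(income - 20_000, 0)

before = tax(20_000)         # $4,000 owed on a $20,000 salary
after = tax(22_000) / 1.10   # tax on the inflated salary, deflated to old dollars

print(f"real tax bill: ${before:,.0f} -> ${after:,.0f}")   # $4,000 -> $4,364
```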

But, of course, Lincoln was right, and it is not possible to fool all of the people all of the time. When Ronald Reagan ran for President in 1980 on an antitax, antigovernment platform, he swept out of office an elected President for the first time in forty-eight years and the Democratic majority in the Senate, for the first time in twenty-six. But while Reagan was able to push through both tax reduction and reforms—indexing brackets, for instance, so that inflation no longer automatically raised taxes—he achieved real spending limitations only in the first year of his Presidency. Thereafter his budgets were declared “dead on arrival” as soon as they reached Capitol Hill.

And Congress provided no coherent substitute. Indeed, more than once Congress was unable to enact a single appropriation bill before the start of the fiscal year, October 1. To avoid shutting down the government, it had to pass so-called continuing resolutions that allowed federal departments to continue spending at current levels.

President Reagan was determined to fund the Star Wars project he initiated and to continue the buildup of the military that had begun in the Carter years. He was able to get these expensive programs through Congress, and they finally helped bring victory in the Cold War. But Congress was unwilling to cut spending elsewhere, while the cost of the now-myriad entitlement programs ratcheted upward in real terms year by year.

So federal spending continued to rise without relation to revenues. The result, coupled with the huge bailout required by the savings and loan debacle, was an avalanche of deficits. In the 1980s the debt more than tripled in nominal dollars, growing even faster than it had in the 1970s. But this time inflation did not cushion the blow nearly so much, and the debt more than doubled in real terms. As a percentage of GNP the national debt increased from 34 percent to 58 percent, the highest it had been in three decades.

Numerous “summits” and “budget deals” between the President and Congress were held in the 1980s and 1990s, and numerous “reforms” were agreed upon. But none of them addressed the root of the problem. Indeed, 1985 was the year the budget deficit became a major political issue and the first of the laws meant to bring spending under control, known as Gramm-Rudman, was enacted. But that year Congress also initiated no fewer than 54 new government benefit programs, bringing the total number to 1,013.


Stripped of rhetoric, the attempts to rein in spending amounted to little more than business as usual today with spending cuts promised for tomorrow. None of them produced any lasting reversal of the trend of higher and higher deficits. In the first three years of the 1990s the debt-to-GNP ratio rose another 10 percentage points, to more than 68 percent.

The reason was simple enough. The self-interest of members of Congress in getting re-elected had become intimately intertwined with more and more spending—the quid pro quo of PAC contributions—at the price of prudence and the national interest. The utter congressional domination of the budget process therefore ensured that spending would only increase.

In 1992, with the people clearly unhappy with how the country’s affairs were being handled, Bill Clinton ran for President on a platform of “fundamental change.” A minority of a deeply divided electorate chose him and his platform, rejecting an elected President for only the second time since Hoover lost to Roosevelt sixty years earlier and giving a third-party candidate a higher percentage of the vote than any third-party candidate since Theodore Roosevelt ran as a Progressive in 1912.

But an ossified congressional majority, while paying lip service to restraint, in fact resisted any change in the status quo of how Congress worked, because it would have meant a change in their power. The Madison Effect held them in its grip. The very day after the 1992 election, the congressional leadership flew to Little Rock and advised President-elect Clinton to downplay the congressional and structural reforms that were part of his program, in order to get the rest enacted.

Clinton, in what turned out to be one of the biggest political misjudgments of the twentieth century, agreed. It was to be business as usual in Washington for two more years. But only two years, it turned out. Still another “budget deal” with Congress to curb the federal government’s spending addiction was worked out in 1993, but it was a near carbon copy of the 1990 budget deal that had been an unmitigated failure. This time the Republicans would have none of it, and it passed with no GOP votes whatever. Indeed, the recent decline in the size of the federal deficit, widely touted as the result of the newest budget deal, has been in fact largely due to the sale of assets taken over from failed S&Ls. Even the Clinton administration predicted that the deficit would begin rising again soon.

The people reacted unequivocally at the next opportunity, and the 1994 congressional election was a political earthquake of the first order, one whose aftershocks will rumble through Washington far into the future. The Democratic party lost its majority in the House for the first time in forty years. The Speaker of the House lost his seat, the first time that had happened since the Civil War, and many other “old bulls” went with him. The Senate as well went Republican, along with many state governorships and legislatures.

As soon as the new Congress convened on January 3, 1995, it began to change the system, beginning with extensive reforms of procedures in the House.

It is far too soon to know how profound the revolution begun last January will prove to be. After all, those who have to change the status quo—the members of Congress—are the prime beneficiaries, at least in the short term, of that status quo. The Madison Effect ensures their reluctance to transfer some power over the budget to the President or to end the inherently corrupt system of funding congressional campaigns. But we do know that human nature cannot change; it can only be taken into account as our understanding of it deepens. The Founding Fathers, the greatest practitioners of applied political science the world has known, realized intuitively that the self-interest of politicians must be made, by law, to lie in the same direction as the national interest if the government is to work in the interests of the people for long. If, in searching for the answers to the political problems of the late twentieth century, we have some measure of the same political wisdom, all will be well.
