Fritz Lang’s silent movie classic Metropolis (1927) depicts the downfall of a hierarchical megacity. Metropolis is a city of skyscrapers. At the top, in their penthouse C-suites, lives a wealthy elite led by the autocrat Joh Fredersen. Down below, in subterranean factories, the proletariat toils. After he witnesses an industrial accident, Fredersen’s playboy son is awakened to the squalor and danger of working-class life. The upshot is a violent revolution and a self-inflicted if inadvertent disaster: When the workers smash the power generators, their own living quarters are flooded because the water pumps fail. Today, Metropolis is perhaps best remembered for the iconic female robot that becomes the doppelgänger of the heroine, Maria. Yet it is better understood as a metaphor for history’s fundamental dialectic between hierarchies and networks.
Lang said the film was inspired by his first visit to New York. To his eyes, the skyscrapers of Manhattan were the perfect architectural expression of a hierarchical and unequal society. Contemporaries, notably the right-wing media magnate Alfred Hugenberg, detected a communist subtext, though Lang’s wife, who co-wrote the screenplay, was a radical German nationalist who later joined the Nazi Party. Viewed today, the film transcends the political ideologies of the mid-20th century. With its multiple religious allusions, culminating in an act of redemption, Metropolis is modernity mythologized. The central question it poses is as relevant today as it was then: How can an urbanized, technologically advanced society avoid disaster when its social consequences are profoundly anti-egalitarian?
There is, perhaps, an even more profound question in the subtext of Lang’s film: Who wins, the hierarchy or the network? The greatest threat to the hierarchical social order of Metropolis is posed not by flooding but by a clandestine conspiracy among the workers. Nothing infuriates Fredersen more than the realization that this conspiracy was hatched in the catacombs beneath the city without his knowledge.
In today’s terms, the hierarchy is not a single city but the state itself, the vertically structured super-polity that evolved out of the republics and monarchies of early modern Europe. Though not the most populous nation in the world, the United States is certainly the world’s most powerful state, despite the limits imposed by its constitutional checks and balances. Its nearest rival, the People’s Republic of China, is usually seen as a profoundly different kind of state, for while the United States has two major parties and a gaggle of tiny ones, the People’s Republic has one and only one. American government is founded on the separation of powers, not least the independence of its judiciary; the PRC subordinates law, such as it has evolved in China over the centuries, to the dictates of the Communist Party.
Yet both states are republics, with roughly comparable vertical structures of administration and not wholly dissimilar concentrations of power in the hands of the central government. Economically, the two systems are certainly converging, with China looking ever more to market signals and incentives, while the United States keeps increasing the statutory and regulatory power of government over producers and consumers. And, to an extent that disturbs civil libertarians on both Left and Right, the U.S. government exerts control and practices surveillance over its citizens in ways that are functionally closer to contemporary China than to the America of the Founding Fathers.
To all the world’s states, democratic and undemocratic alike, the new informational, commercial, and social networks of the internet age pose a profound challenge, the scale of which is only gradually becoming apparent. First, email achieved a dramatic improvement in the ability of ordinary citizens to communicate with one another. Then the internet came to have an even greater impact on the ability of citizens to access information. The emergence of search engines marked a quantum leap in this process. The advent of laptops, smartphones, and other portable devices then emancipated electronic communication from the desktop. With the explosive growth of social networks came another great leap, this time in the ability of citizens to share information and ideas.
It was not immediately obvious how big a challenge all this posed to the established state. There was a great deal of cheerful talk about the ways in which the information technology revolution would promote “smart” or “joined-up” government, enhancing the state’s ability to interact with citizens. However, the efforts of Anonymous, WikiLeaks, and Edward Snowden to disrupt the system of official secrecy, directed mainly against the U.S. government, have changed everything. In particular, Snowden’s revelations have exposed the extent to which Washington was seeking to establish a parasitical relationship with the key firms that operate the various electronic networks, acquiring not only metadata but sometimes also the actual content of vast numbers of phone calls and messages. Techniques of big-data mining, developed initially for commercial purposes, have been adapted to the needs of the National Security Agency.
The most recent, and perhaps most important, network challenge to hierarchy comes with the advent of virtual currencies and payment systems like Bitcoin. Since ancient times, states have reaped considerable benefits from monopolizing or at least regulating the money created within their borders. It remains to be seen how big a challenge Bitcoin poses to the system of national fiat currencies that has evolved since the 1970s and, in particular, how big a challenge it poses to the “exorbitant privilege” enjoyed by the United States as the issuer of the world’s dominant reserve (and transaction) currency. But it would be unwise to assume, as some do, that it poses no challenge at all.
Clashes between hierarchies and networks are not new in history; on the contrary, there is a sense in which they are history. Indeed, the course of history can be thought of as the net result of human interactions along four axes.
The first of these is time. The arrow of time can move in only one direction, even if we have become increasingly sophisticated in our conceptualization and measurement of its flight. The second is nature: in this context, the material or environmental constraints over which we still have little control, notably the laws of physics, the geography and geology of the planet, its climate and weather, the incidence of disease, our own evolution as a species, our fertility, and the bell curves of our abilities as individuals in a series of normal distributions. The third is networks. Networks are the spontaneously self-organizing, horizontal structures we form, beginning with knowledge and the various “memes” and representations we use to communicate it. These include the patterns of migration and miscegenation that have distributed our species and its DNA across the world’s surface; the markets through which we exchange goods and services; the clubs we form, as well as the myriad cults, movements, and crazes we periodically produce with minimal premeditation and leadership. And the fourth is hierarchies, vertical organizations characterized by centralized and top-down command, control, and communication. These begin with family-based clans and tribes, out of which or against which more complex hierarchical institutions evolved. They include, too, tightly regulated urban polities reliant on commerce or bigger, mostly monarchical, states based on agriculture; the centrally run cults often referred to as churches; the armies and bureaucracies within states; the autonomous corporations that, from the early modern period, sought to exploit economies of scope and scale by internalizing certain market transactions; academic corporations like universities; political parties; and the supersized transnational states that used to be called empires.
Note that the environment is not wholly a given; it can be shaped by, as well as shape, humanity. It may well be that, in the foreseeable future, our species’ impact on the earth’s climate will become the dominant driver of history, but that is not yet the case. For now, the interactions of networks and hierarchies are more important. Networks are not planned by a single authority; they are the main source of innovation but are relatively fragile. Hierarchies exist primarily because of economies of scale and scope, beginning with the imperative of self-defense. To that end, but for other reasons too, hierarchies seek to exploit the positive externalities of networks. States need networks, for no political hierarchy, no matter how powerful, can plan all the clever things that networks spontaneously generate. But if the hierarchy comes to control the networks so much as to compromise their benign self-organizing capacities, then innovation is bound to wane.
Consider some examples of history along these four axes. The population of the entire Eurasian landmass was devastated by the Black Death of the 14th century, a natural disaster transmitted along trade networks. But the impact was very different in Europe compared with Asia. The main difference between the West and the East of Eurasia after 1500 was that networks in the West were much freer from hierarchical dominance than in the East. No monolithic empire rose in the West; multiple and often weak principalities prevailed. Printing existed in China long before the 15th century, but its advent in Germany was explosive because of the network effects generated by the rapid spread of Gutenberg’s easily replicated technology. The Reformation, which was printed as much as it was preached, unleashed a wave of religious revolt against the hierarchy of the Roman Catholic Church. It was only after prolonged and bloody conflict that the monarchies were able to re-impose their hierarchical control over the new Protestant sects.
European history in the 17th, 18th, and 19th centuries was characterized by a succession of network-driven waves of innovation: the Scientific Revolution, the Enlightenment, and the Industrial Revolution. In each case, the sharing of novel ideas within networks of scholars and tinkerers produced powerful and mainly positive externalities, culminating in the decisive improvements in economic efficiency and then life expectancy experienced in the British Isles, Western Europe, and North America from the late 18th century. The network effects of trade and migration were especially powerful, as European merchants and settlers exploited falling transportation costs to export their ideas, as well as their techniques and goods, to the rest of the world. Thanks to those ideas, this was also an era of political revolutions. Ideas about liberty, equality, and fraternity crossed the Atlantic as rapidly as pirated technology from the cotton mills of Lancashire. Kings were toppled, aristocracies abolished, and churches dissolved or made to compete without the support of a state.
Yet the 19th century saw the triumph of hierarchies over the new networks. This was partly because hierarchical corporations—which began, let us remember, as state-sponsored monopolies like the East India Company—were as important in the spread of industrial capitalism as horizontally structured markets. Firms could reduce the transaction costs of the market as well as exploit economies of scale and scope. The railways, steamships, and telegraph cables that made possible the first age of globalization had owners.
The key, however, was the victory of hierarchy in the realm of politics. Why revolutionary ideologies like Jacobinism and Marxism-Leninism so quickly produced highly centralized hierarchical political structures is one of the central puzzles of the modern era, though it was an outcome more or less accurately predicted by much classical political theory. Whatever the democratic aspirations of the revolutionaries, their ideologies ended up as sources of legitimation for autocrats who were markedly more power-hungry than the monarchs of the ancien régime.
True, the energies unleashed by the overthrow of the Bourbons were (just barely) insufficient to overcome those produced by the British synthesis of monarchism and the pursuit of Mammon, which restored or revived the continental monarchies, including, temporarily, the Bourbons themselves. But the old order was only partially restored. Napoleon had taught even his most ardent enemies an unforgettable lesson, as Clausewitz understood, about how an imperial leader could wield power by commanding a people in arms.
For a time it seemed that a modus vivendi had arisen between the new networks of science and industry and the old hierarchies of hereditary rule. Half the world fell under the sway of a dozen Western empires, and much of the rest was under their economic influence. But optimists, from Norman Angell to Andrew Carnegie, felt sure that these empires would not be so foolish as to jeopardize the benefits of international exchange. After all, it was partly by taxing the fruits of the first era of globalization that the empires could finance their vast armies, navies, and bureaucracies. The optimists proved wrong. So complete was the imperial system of command, control, and communication that when the empires resolved to go to war with one another over arcane issues like the status of Bosnia-Herzegovina or the neutrality of Belgium, they were able to mobilize in excess of seventy million men as soldiers or sailors. In France and Germany about a fifth of the prewar population ended up in uniform, bearing arms.
The triumph of hierarchy over networks was symbolized by the complete failure of the Second International of socialist parties to prevent the World War. When the leaders of European socialism met in Brussels at the end of July 1914, they could do little more than admit their own impotence. What the Viennese satirist Karl Kraus called the alliance of “thrones and telephones” had marched the young men of Europe off to Armageddon. Those who thought the war would not last long underestimated the hierarchical state’s ability to sustain industrialized slaughter.
The mid-20th century was the zenith of hierarchy. Although World War I ended with the collapse of no fewer than four of the great dynastic empires—the Romanov, Habsburg, Hohenzollern, and Ottoman—they were replaced with astonishing swiftness by new and stronger states based on the normative paradigm of the nation-state, the ethno-linguistically defined anti-imperium.
Not only did the period after 1918 witness the rise of the most centrally controlled states of all time (Stalin’s Soviet Union, Hitler’s Third Reich, and Mao’s People’s Republic); it was also an era in which hierarchies flourished in the economic, social, and cultural spheres. Central planners ruled, whether they worked for governments, armies, or large corporations. In Aldous Huxley’s Brave New World (1932), the Fordist World State controls everything from eugenics to narcotics and euthanasia; the fate of the non-conformist Bernard Marx is banishment. In Orwell’s Nineteen Eighty-Four (1949) there is not the slightest chance that Winston Smith will be able to challenge Big Brother’s rule over Airstrip One; his fate is to be tortured and brainwashed. A remarkable number of the literary heroes of the high Cold War era were crushed by one system or the other: from Heller’s John Yossarian to le Carré’s Alec Leamas to Solzhenitsyn’s Ivan Denisovich.
Kraus was right: The information technology of mid-century overwhelmingly favored the hierarchies. Though the telegraph and telephone created vast new networks, they were relatively easy to cut, tap, or control. Newsprint, radio, cinema, and television were not true network technologies because they generally involved one-way communication from the content provider to the reader or viewer. During the Cold War the superpowers were mostly able to control information flows by manufacturing or sponsoring propaganda and classifying or censoring anything deemed harmful. Sensation surrounded every spy scandal and defection; yet in most cases all that happened was that classified information was passed from one national security state to the other. Only highly trained personnel in governmental, academic, or corporate research centers used computers, and those were anything but personal computers. The self-confidence of the technocrats at that time is nicely exemplified by MONIAC (the Monetary National Income Analogue Computer), a hydraulic device designed by Bill Phillips (of Phillips Curve fame) that was supposed to simulate the effects of Keynesian economic policy on the UK economy.
There were moments of truth, particularly in the 1970s, when classified information reached the public through the free press in the West or through samizdat literature in the Soviet bloc. Yet the striking feature of the later Cold War was how well the national security state managed to withstand exposures like the report of the Church Committee or the publication of the Gulag Archipelago. George H.W. Bush, appointed head of the Central Intelligence Agency in 1976—in the midst of the Church Committee’s work—went on to serve as Vice President and President. Within a decade of the collapse of the Soviet Union, the Russian Federation had a former KGB operative as its President. The Pentagon proved to be mightier than the Pentagon Papers.
Today, by contrast, the hierarchies seem to be in much more trouble. The most obvious challenge to established hierarchies is the flow of information unleashed by the advent of the personal computer, email, and the internet, which have allowed ordinary citizens to organize themselves into much larger and more dispersed networks than has ever been possible before. The PC has empowered the individual the way the book did after the 15th-century breakthrough in printing. Indeed, the trajectories for the production and price of PCs in the United States between 1977 and 2004 are remarkably similar to the trajectories for the production and price of printed books in England from 1490 to 1630. The differences are that our networking revolution is much faster and that it is global.
In a far shorter space of time than it took for 84 percent of the world’s adults to become literate, a remarkably large proportion of humanity has gained access to the internet. Although its origins can be traced back to the late 1960s, the internet as a system of interconnected computer networks did not really begin until the standard protocol suite (TCP/IP) was adopted at universities in the 1980s. As recently as 1998 only around 2 percent of the world’s population were internet users. Today the proportion is 39 percent; in the developed world, 77 percent.
Google was incorporated in 1998. Its first premises were a garage in Menlo Park. Today it has the capacity to process more than a billion search requests and 24 petabytes of user-generated data every day. Facebook was founded at Harvard ten years ago. Today it has 1.23 billion monthly users. Twitter was created eight years ago. Now it has 200 million users, who send more than 400 million tweets daily.
The challenge these new networks pose to established hierarchies is threefold. First, they vastly increase the volume of information to which citizens can have access, as well as the speed with which they can have access to it. Second, they empower individual citizens to publicize things that might otherwise remain secret or known only to a few. Edward Snowden and Daniel Ellsberg did the same thing by making public classified documents, but Snowden has already revealed much more than Ellsberg and to vastly more people, while Julian Assange, the founder of WikiLeaks, has far out-scooped Carl Bernstein and Bob Woodward (even if he has not yet helped to bring down an American President). Third, and perhaps most importantly, the networks expose by their very performance the inefficiency of hierarchical government.
Politicians and voters remain the captives of a postwar campaign vocabulary in which the former pledge to the latter that they will provide not just additional public goods but also “create jobs” without significantly increasing the cost to most voters in terms of taxation. The history of President Barack Obama’s Administration can be told as a series of pledges to increase employment (“the stimulus”), reduce the risk of financial crisis, and provide universal health insurance. The President’s popularity has declined fastest when, as with the Patient Protection and Affordable Care Act, the inability of the Federal government to fulfill these pledges efficiently has been most exposed. The shortcomings of the website Healthcare.gov in many ways epitomized the fundamental problem: In the age of Amazon, consumers expect basic functionality from websites. Daily Show host Jon Stewart spoke for hundreds of thousands of frustrated users when he taunted former Health and Human Services head Kathleen Sebelius: “I’m going to try and download every movie ever made, and you’re going to try to sign up for Obamacare, and we’ll see which happens first.”
Yet the trials and tribulations of “Obamacare” are merely a microcosm for a much more profound problem. The modern state, at least in its democratic variant, has evolved a familiar solution to the problem of increasing the provision of public goods without making proportionate increases to taxation, and that is to finance current government consumption through borrowing, while at the same time encouraging citizens to increase their own leverage by various fiscal incentives, such as the deductibility of mortgage interest payments. The vast increase of private debt that preceded the financial crisis of 2008 was succeeded by a comparably vast increase in public debt. At the same time, central banks took increasingly unorthodox steps to shore up tottering banks and plunging asset markets by purchases of securities in exchange for excess reserves. With short-term interest rates at zero, “quantitative easing” was designed to keep long-term interest rates low too. The financial world watches with bated breath to see how QE can be “tapered” and when short-term rates will be raised. Most economists nevertheless take for granted the U.S. government’s ability to print its own currency without limit. Many assume that this offers some relatively easy way out of trouble if rising interest rates threaten to make debt service intolerably burdensome. But this assumption may be wrong.
Since ancient times, states have exploited their ability to issue currency, whether coins stamped with the king’s likeness or electronic dollars on a screen. But if the new networks are in the process of creating an alternative form of money, such as Bitcoin purports to be, then perhaps the time-honored state privilege to debase the currency is at risk. Bitcoin offers many advantages over a fiat currency like the U.S. dollar. As a means of payment—especially for online transactions—it is faster, cheaper, and more secure than a credit card. As a store of value it has many of the key attributes of gold, notably finite supply. As a unit of account it is having teething troubles, but that is because it has become an attractive speculative object. It is too early to predict that Bitcoin will succeed as a parallel currency, but it is also too early to predict that it will fail. In any case, governments can fail, too.
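Bitcoin’s finite supply, it is worth noting, is not a policy promise but an arithmetic consequence of its protocol: the block reward began at 50 bitcoins and halves every 210,000 blocks, with amounts tracked in indivisible satoshis (hundred-millionths of a bitcoin). A minimal Python sketch of that schedule (the constant and function names here are illustrative, not those of the Bitcoin source code) shows why the total can never reach 21 million:

```python
# Sketch of Bitcoin's issuance schedule (names are illustrative).
# The block subsidy starts at 50 BTC, halves every 210,000 blocks,
# and is counted in integer satoshis, so it eventually rounds to zero.
HALVING_INTERVAL = 210_000          # blocks between subsidy halvings
INITIAL_SUBSIDY = 50 * 100_000_000  # 50 BTC, expressed in satoshis

def total_supply_satoshis() -> int:
    """Sum the subsidy paid over every halving era until it hits zero."""
    total, subsidy = 0, INITIAL_SUBSIDY
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy
        subsidy //= 2  # integer halving truncates the tail
    return total

print(total_supply_satoshis() / 100_000_000)  # just under 21 million BTC
```

The integer division matters: the cap works out to roughly 20,999,999.98 bitcoins, a ceiling that no central bank can vote to raise. That is the sense in which Bitcoin shares gold’s finite supply.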
Where governments fail most egregiously, new networks may well increase the probability of successful revolution. The revolutionary events that swept the Middle East and North Africa beginning in Tunisia in December 2010—the so-called Arab Spring—were certainly facilitated by various kinds of information technology, even if for most Arabs it was probably the television channel Al Jazeera more than Facebook or Twitter that spread the news of the revolution. Most recently, the revolutionaries in Kiev who overthrew Ukrainian President Viktor Yanukovych made effective use of social networks to organize their protests in the Maidan and to disseminate their critique of Yanukovych and his cronies.
Yet it would be naive to assume that we are witnessing the dawn of a new era of free and equal netizens, all empowered by technology to speak truth to (and about) power, just as it would be naive to assume that the hierarchical state is doomed, if not to revolutionary downfall then at least to a permanent diminution of its capacity for social control.
Modern networks have prospered, paradoxically, in ways that are profoundly inegalitarian. That is because ownership of the information infrastructure and the rents from it are so concentrated. Google at the time of writing is worth $359 billion by market capitalization. About 16 percent of its shares, worth $58 billion, are owned by its founders, Larry Page and Sergey Brin. The market capitalization of Facebook is $161 billion; 28 percent of the shares, worth $45 billion, are owned by its founder Mark Zuckerberg. If Thomas Piketty needs further proof of his thesis that the world is reverting to the inequality of a century ago because, absent world wars and revolutions, the rate of return on capital (and the rate of growth of executive compensation) tends to outstrip the rate of growth of aggregate income, it is there in abundance in Silicon Valley. Granted, the young and very wealthy people who literally own the modern networks tend to have somewhat liberal political views. A few of them are libertarians. But few of them would welcome Gallic rates of taxation, much less a French-style egalitarian revolution.
At the same time, the hierarchical state has not been slow to appreciate the opportunities that the new social networks present. Edward Snowden’s most startling revelation was the complicity of companies like Google, Apple, Yahoo, and Facebook in the National Security Agency’s global surveillance programs, notably PRISM. It is all very well for Mark Zuckerberg to complain that he has been “so confused and frustrated by the repeated reports of the behavior of the U.S. government” and to declare self-righteously: “When our engineers work tirelessly to improve security, we imagine we’re protecting you against criminals, not our own government.” But he knows full well that since at least 2009 Facebook has responded to tens of thousands of U.S. government requests for information about Facebook users. If not for Snowden’s leaks, we would not have known just how freely the NSA was making use of the provisions of the Foreign Intelligence Surveillance Act.
The owners of the networks are also well aware that plotting jihad is not the principal use to which their technology is put, any more than plotting revolution is. They owe their security much more to network surfers’ apathy than to the NSA. Most people do not go online to participate in flash mobs. Most women seem to prefer shopping and gossiping; most men prefer sports and pornography. All those neural quirks produced by evolution make us complete suckers for the cascading stimuli of tweets, Instagrams, and Facebook pokes from members of our electronic kinship group. The networks cater to our solipsism (selfies), our short attention spans (140 characters), and our seemingly insatiable appetite for “news” about “celebrities.”
In the networked world, the danger is not popular insurrection but indifference; the political challenge is not to withstand popular anger but to transmit any kind of signal through the noise. What can focus us, albeit briefly, on the tiresome business of how we are governed or, at least, by whom? When we speak of “populism” today, we mean simply a politics that is audible as well as intelligible to the man in the street. Not that the man in the street is actually in the street. Far more likely, he is the man slumped on his sofa, his attention skipping fitfully from television to laptop to tablet to smartphone and back to television. And what gets his attention? The end of history? The clash of civilizations? The answer turns out to be the narcissism of small differences.
Liberals denounce conservatives with astonishing vituperation; Republicans inveigh against Democrats. But to the rest of the world what is striking are the strange things nearly all Americans agree about (for example, that children should be packed off to camps in the summer). Many English people are outraged about immigrant Romanians. But to East Asian eyes the English are scarcely distinguishable from Romanians. (Indeed, in many parts of formerly working-class England people live much as the reviled Roma are alleged to: in squalor.)
It is no accident that most of the world’s conflicts today are not between civilizations, as Samuel Huntington foresaw, but between neighbors. That, after all, is what is really going on in Syria, Iraq, and the Central African Republic, not to mention Ukraine. Can anyone other than a Russian or a Ukrainian tell a Russian and a Ukrainian apart? And yet how readily one is pitted against the other, and how distractingly.
At times, it can seem as if we are condemned to try to understand our own time with conceptual frameworks more than half a century old. Since the financial crisis that began in 2007, many economists have been reduced to recycling the ideas of John Maynard Keynes, who died in 1946. At the same time, analysts of international relations seem to be stuck with terminology that dates from roughly the same period: “realism” or “idealism”, containment or appeasement. (George Kennan’s “Long Telegram” was dispatched just two months before Keynes’s death.)
Yet our own time is profoundly different from the mid-20th century. The near-autarkic, commanding and controlling states that emerged from the Depression, World War II, and the early Cold War exist only as pale shadows of their former selves. Today, the combination of technological innovation and international economic integration has created entirely new forms of organization—vast, privately owned networks—that were scarcely dreamt of by Keynes and Kennan. We must ask ourselves: Are these new networks really emancipating us from the tyranny of the hierarchical empire-states? Or will the hierarchies ultimately take over the networks as they did a century ago, in 1914, successfully subordinating them to the priorities of the national security state?
A libertarian utopia of free and equal netizens—all networked together, sharing all available data with maximum transparency and minimal privacy settings—has a certain appeal, especially to the young. It is romantic to picture these netizens, like the workers in Lang’s Metropolis, spontaneously rising up against the world’s corrupt hierarchies. Yet the suspicion cannot be dismissed that, despite all the hype of the Information Age and all the brouhaha about Messrs. Snowden and Assange, the old hierarchies and new networks are in the process of reaching a quiet accommodation with one another, much as thrones and telephones did a century ago. We shall all know what it means when (as begins to be imaginable) Sheryl Sandberg leans all the way into the White House. It will mean that Metropolis lives on.
In my previous two articles, I have shown that Paul Krugman - revered by his acolytes as the Invincible Krugtron - failed to anticipate the financial crisis and wrongly predicted that the single European currency would fall victim to it. I have exploded his claim to intellectual invincibility. Very clearly, he has made at least twice as many major mistakes in his career as the mere two he has previously admitted to.
You may ask: Why have I taken the trouble to do this? I have three motives. The first is to illuminate the way the world really works, as opposed to the way Krugman and his beloved New Keynesian macroeconomic models say it works. The second is to assert the importance of humility and civility in public as well as academic discourse. And the third, frankly, is to teach him the meaning of the old Scottish regimental motto: nemo me impune lacessit ("No one attacks me with impunity").
I am not an economist. I am an economic historian. The economist seeks to simplify the world into mathematical models - in Krugman's case models erected upon the intellectual foundations laid by John Maynard Keynes. But to the historian, who is trained to study the world "as it actually is", the economist's model, with its smooth curves on two axes, looks like an oversimplification. The historian's world is a complex system, full of non-linear relationships, feedback loops and tipping points. There is more chaos than simple causation. There is more uncertainty than calculable risk. For that reason, there is simply no way that anyone - even Paul Krugman - can consistently make accurate predictions about the future. There is, indeed, no such thing as the future, just plausible futures, to which we can only attach rough probabilities. This is a caveat I would like ideally to attach to all forward-looking conjectural statements that I make. It is the reason I do not expect always to be right. Indeed, I expect often to be wrong. Success is about having the judgment and luck to be right more often than you are wrong.
On both Europe and the approach of the financial crisis, I would say that - unlike Paul Krugman - I was right more often than I was wrong. But so what? When investors and fund managers are right more often than they are wrong, they are rewarded - handsomely. When they are wrong more often than they are right, they lose money or clients, usually both. The world of public intellectuals is different. Using their academic credibility to pontificate about the future, professor-pundits can be wrong again and again without losing money or their tenured jobs. Many distinguished and lucrative careers have been based on just such a pattern of unpunished error. By the same token, the returns on being right are surprisingly low. A book sells because its prediction fits the mood of the moment. The author may get a bonus - in the form of additional sales - if he turns out to be right. But he doesn't have to return the royalty checks if he turns out to be dead wrong.
So we public intellectuals should not brag too loudly when we get things right. Nor should we condemn too harshly the predictions of others that are subsequently falsified by events. The most that we can do in this unpredictable world is read as widely and deeply as we can, think seriously, and then exchange ideas in a humble and respectful manner. Nobody ever seems to have explained this to Paul Krugman. There is a reason that his hero John Maynard Keynes did not go around calling his great rival Friedrich Hayek a "mendacious idiot" or a "dope".
For too long, Paul Krugman has exploited his authority as an award-winning economist and his power as a New York Times columnist to heap opprobrium on anyone who ventures to disagree with him. Along the way, he has acquired a claque of like-minded bloggers who play a sinister game of tag with him, endorsing his attacks and adding vitriol of their own. I would like to name and shame in this context Dean Baker, Josh Barro, Brad DeLong, Matthew O'Brien, Noah Smith, Matthew Yglesias and Justin Wolfers. Krugman and his acolytes evidently relish the viciousness of their attacks, priding themselves on the crassness of their language. But what qualifies a figure like Matt O'Brien to call anyone a "disingenuous idiot"? What exactly are his credentials? 35,550 tweets? How does he essentially differ from the cranks who, before the Internet, had to vent their spleen by writing letters in green ink?
To be frank, I probably would not have bothered to write all this if I myself had not been one of the targets of Krugman's crude invective. The "Always-Wrong Club" is just the latest of many ad hominem attacks he has made on me since 2009. On one occasion he implied that I was a racist and then called me a "whiner" when I objected. On another he referred to me as a "poseur", adding for good measure that I had "choked on [my] own snark". Last year he wildly accused me of making "multiple errors and misrepresentations" in an article for Newsweek, only one of which he ever specified. More recently I was accused of "trying to flush [my] own past statements down the memory hole" - a characteristically crude turn of phrase - and of being "inane". Re-reading these, I can only marvel at the man's hypocrisy, for Krugman often sanctimoniously denies that he "does ad hominem" - and once had the gall to accuse Joe Scarborough of making such an attack on him when Scarborough merely quoted Krugman's own words back at him. For the record, here is his own definition: "ad hominem attacks involve attacking the person in general rather than what the person has to say on a specific issue".
The start of Krugman's vendetta was a public debate we had in New York in April 2009, in which we disagreed about U.S. fiscal policy in the wake of the financial crisis. Krugman has repeatedly misrepresented what I said in that debate. Immediately afterwards, he cynically claimed on his blog that I had been arguing that high deficits would crowd out private spending. Later, in order to have a straw man for his vulgar Keynesian claim that even larger deficits would have produced a faster recovery, he started to pretend that I had predicted "soaring interest rates" and had called for immediate austerity. For example:
Niall Ferguson ... said that government borrowing would bring on the bond vigilantes and send rates soaring. ... [His] vision led to calls for austerity now now now; mine said that the overwhelming danger was that we wouldn't provide enough stimulus, and that we would pull back too soon. Sure enough, we didn't and we did. And now catastrophe looms.
But anyone who reads the transcript of our debate - even the edited version that was published - can see that this was not my position. What I said was that there was a contradiction between fiscal and monetary stimulus. My point was that, with debt soaring and growth slumping, the Fed would have to buy much larger amounts of U.S. Treasuries than anyone then anticipated. I was right in that debate to say that the Obama administration's growth forecasts were wildly over-optimistic and that therefore the federal debt would rise "in the next five or ten years to around 100 per cent of GDP" (according to the IMF, the gross federal debt passed that mark last year). I was also right that the Fed's balance sheet would "explode to up to $3 or $4 trillion". The second phase of what then became known as "quantitative easing", instituted in November 2010, was explicitly designed to keep long-term interest rates artificially low. It was followed in 2012 by QE3. The Fed's balance sheet currently stands at $3.7 trillion.
On that occasion, Krugman in fact agreed with me that the "long-run solvency of the US government" was something to worry about. The ability to manage a debt as large as 100 per cent of GDP, he also agreed, "does depend upon people's belief that you will behave responsibly, and that is somewhat in question". As recent events have shown, that is a rather important caveat.
There is, of course, no way completely to refute Krugman's central claim that additional borrowing would have produced a more rapid recovery, since it is a counterfactual based solely on his models. It may be true, as he asserted last month, that a stimulus bill three times larger than the one actually passed in 2009 - i.e. a stimulus totaling $1.76 trillion - would have propelled the U.S. economy out of the liquidity trap. It may be true that the resulting increase of the federal debt would have been just "about $1 trillion", or 6% of GDP. And it may be true that this would have had no negative side effects, for example on the creditworthiness of the United States.
Then again, it may not be true. Other equally eminent economists have taken a much less sanguine view of this "vulgar Keynesianism", openly questioning his back-of-envelope calculations about a mega-stimulus. In a parallel debate about the policy options open to the United Kingdom, there seems to be a rather stronger argument against additional borrowing even in terms of Krugman's own "simplish IS-LM model". And, as Greg Mankiw has pointed out, in an earlier debate about the wisdom of deficit finance - when the President was a Republican, not a Democrat, and when the stimulus took the form of tax cuts and spending on war - Krugman himself took the diametrically opposite position.
In 2003, he warned, the United States was heading for "a fiscal train wreck":
The Congressional Budget Office operates under ground rules that force it to wear rose-colored lenses. If you take into account - as the C.B.O. cannot - the effects of likely changes in the alternative minimum tax, include realistic estimates of future spending and allow for the cost of war and reconstruction, it's clear that the 10-year deficit will be at least $3 trillion. ... We're looking at a fiscal crisis that will drive interest rates sky-high. ... because of the future liabilities of Social Security and Medicare, the true budget picture is much worse than the conventional deficit numbers suggest. ...
I'm terrified about what will happen to interest rates once financial markets wake up to the implications of skyrocketing budget deficits. ... my prediction is that politicians will eventually be tempted to resolve the crisis the way irresponsible governments usually do: by printing money, both to pay current bills and to inflate away debt.
And as that temptation becomes obvious, interest rates will soar. ... I think that the main thing keeping long-term interest rates low right now is cognitive dissonance.
For the record: the federal debt in public hands back then was $2.8 trillion, 35% of GDP. Today it is approaching $12 trillion, more than double as a share of GDP. The projected ten-year deficit then was, as Krugman states, $3 trillion; now, the equivalent figure is more than double that. And yet today we supposedly have no "train wreck" to worry about; indeed, it would be fine if the debt were a trillion dollars bigger.
Yes, I know, that was then and this is now; this time is different, we're in a liquidity trap, and all that. But what about 2008, the year the crisis began? For some strange reason, at that time Krugman vehemently opposed the presidential candidate arguing for - as he himself acknowledged - the larger fiscal stimulus. His name was John McCain - "McCain the Destroyer", as Krugman crudely called him. Four years later, Krugman explicitly acknowledged that Mitt Romney's (admittedly vague) fiscal plans would "blow up the deficit" and - shamelessly using an argument he had previously derided - compared the United States in such a high-deficit scenario with Greece.
In short, if Paul Krugman truly has won "a stunning victory" in "an epic intellectual debate" - as he recently claimed - it appears to have been over ... Paul Krugman.
The question is why, in the light of these numerous contradictions, anyone should be expected to share Krugman's certainty that a bigger stimulus would have worked. He was equally certain that there would be a dollar crisis - before the financial crisis produced the opposite effect. He was even more certain that the euro would break up - until it survived. Leave aside the glaring issue of ideological bias. Even if Krugman were not viscerally partisan, the point is that in the realm of macroeconomic prediction, self-confidence can only ever be a bluff. Judging by their past performance, Krugman's models are less reliable than tossing a coin.
Not being convinced of my own infallibility, I had no difficulty admitting that I had been wrong about the trajectory of U.S. interest rates back in 2009 and he had been right. Yet somehow this was not enough for Krugman, who could not bring himself to give me a proper apology for alleging falsely that I had never acknowledged my mistake. Instead, he has gone on claiming ad nauseam that I have "derped" (repeatedly said something wrong) on the question of inflation. In fact I wrote one column on this subject, in May 2011, and this was how it concluded:
Maybe in June, when the Fed stops quantitative easing ... inflation will recede. Maybe high fuel prices will, as Goldman Sachs predicts, slow the economy and revive the specter of deflation. Maybe. Or maybe inflation expectations [have] started shifting ...
(Admittedly, something of a Krugman hedge, that conclusion. But hardly a confident prediction of higher inflation. And I certainly did not repeat it the way Krugman repeated his predictions of euro collapse.) Meanwhile, applying his signature double standard, he himself has admitted elsewhere that "the only big thing I got wrong ... was in underestimating the stickiness of wages, and hence inflation, and therefore overestimating the risks of actual deflation".
When Paul Krugman first began his attacks against me, he made it clear - as if almost proud of the fact - that he had read none of my books. (Quote: "I'm told that some of his straight historical work is very good.") This was a mistake on his part. I have read his books. If he had read mine, he would perhaps have thought twice about seeking to discredit me on the basis of a few articles and interviews.
Krugman's unabashed ignorance of my academic work raises the question of what, in fact, he does read, apart from posts by the other liberal bloggers who are his zealous followers. There is a ratio that really would be good to have as a metric of the seriousness of a public intellectual. It is the ratio of words read to words written. Ideally, I would say, that ratio should be between 100 and 1000 to 1. But in the case of the Invincible Krugtron, I begin to suspect it has now fallen below unity. (When he does read a book, he mentions it in his blog as if it's a special holiday treat.)
In the past few days, I have pointed out that he has no right at all to castigate me or anyone else for real or imagined mistakes of prognostication. But the fact that Paul Krugman is often wrong is not the most important thing. It is his utter disregard for the norms of civility that is crucial here. I am not alone in being dismayed by Krugman's "spectacularly uncivil behavior". "My duty, as I see it, is to make my case as best I honestly can," Krugman has written, "not [to] put on a decorous show of civilized discussion." Well, I am here to tell him that "civilized discussion" matters. It matters because vitriolic language of the sort he uses is a key part of what is wrong with America today. As an eminent economist said to me last week, people are afraid of Krugman. More "decorous" but perhaps equally intelligent academics simply elect not to enter a public sphere that he and his parasitical online pals are intent on poisoning. I agree with Raghuram Rajan, one of the few economists who authentically anticipated the financial crisis: Krugman's is "the paranoid style in economics":
All too often, the path to easy influence is to impugn the other side's motives and methods ... Instead of fostering public dialogue and educating the public, the public is often left in the dark. And it discourages younger, less credentialed economists from entering the public discourse.
Where I come from, however, we do not fear bullies. We despise them. And we do so because we understand that what motivates their bullying is a deep sense of insecurity. Unfortunately for Krugtron the Invincible, his ultimate nightmare has just become a reality. By applying the methods of the historian - by quoting and contextualizing his own published words - I believe I have now made him what he richly deserves to be: a figure of fun, whose predictions (and prescriptions) no one should ever again take seriously.
Niall Ferguson's latest book is The Great Degeneration (Penguin Press).
© Niall Ferguson 2013
As I pointed out yesterday, Paul Krugman's right to consign others to the "Always-Wrong Club", and routinely to insult anyone who dares to disagree with him, is fatally vitiated by his own embarrassingly bad record of commentary on the European phase of the financial crisis. His repeated and erroneous predictions of the European Monetary Union's imminent collapse constitute a perfect example of what he and his cronies childishly call "derping" - the practice of those who "take a position and refuse to alter that position no matter how strongly the evidence refutes it, who continue to insist that they have The Truth despite being wrong again and again".
Regrettably, Krugman - also known to himself and his cronies as "the Invincible Krugtron" - has not found time in his busy schedule of blogging to make the apologies that I believe are due, not only for his incivility and hypocrisy, but also for his own personal contribution to the crisis of confidence that afflicted Europe in 2011 and 2012. Seldom in the history of the economics profession can one man in a crowded theater have shouted fire more often and more loudly, apparently indifferent to the real economic consequences of his actions.
Why, you may ask, did Krugman feel the need to be so bold (and so wrong) in predicting the euro's collapse over and over again, in his column, on his blog and to every media outlet that would give him an interview? The answer is because he and his beloved economic models had so completely failed to predict the U.S. financial crisis and he did not want to repeat his mistake.
In 2006, the year before the financial crisis began, Krugman had a twice-weekly New York Times column. What a perfect opportunity, one might have thought, for the infallible Nobel laureate and author of Depression Economics to warn his readers about the gathering storm. Retrospectively, Krugman pats himself on the back: "How did I do? ... And the answer is, not too badly. ... I've had a pretty good stretch." In fact, only eight of Krugman's more than a hundred columns in 2006 referred to the bubble in the U.S. housing market and the danger posed by its bursting. The key word "subprime" did not appear in his column until March 2007. Nearly everything else was a partisan rant of one sort or another against the policies of the administration of George W. Bush.
As an economist honored by the Swedish central bank for his work on trade theory, and as someone who had also cut his teeth as a commentator on the 1997-8 Asian Crisis, Krugman made the mistake of thinking the trade deficit and therefore the dollar were crucial. Here is a revealing passage from a column he wrote in February 2006:
Sooner or later the trade deficit will have to come down ... and both American consumers and the U.S. government will have to start living within their means. So how bad will it be? ... A "soft landing" looks unlikely, because too many economic players have unrealistic expectations. This is true of international investors, who are still snapping up U.S. bonds at low interest rates, seemingly oblivious both to the budget deficit and to the consensus view among trade experts that the dollar will eventually have to fall 30 percent or more to eliminate the trade deficit.
Many other economists had been making predictions like that for years. Yet this was the one thing that didn't happen in the financial crisis. On the contrary, when the crisis struck, investors' appetites for U.S. bonds and dollars surged.
Interestingly, in those days Krugman dismissed an argument of which he is now inordinately fond - that low bond yields signal a capacity for additional government borrowing: "Of course," he wrote in April 2006, "optimists have a comeback: if things are really that bad, why are so many foreign investors still buying U.S. bonds? And they point out that those predicting problems from the trade deficit have been wrong so far. But I have two words for those who place their faith in the judgment of investors, and believe that a few good years are enough to prove the skeptics wrong: Nasdaq 5,000." These days, it is Krugman who repeatedly assures his readers that low bond yields are proof that "deficit scolds" are wrong - in fact, they are an invitation to the Treasury to run a bigger deficit. I have one word for him: hypocrite.
To be sure, Krugman identified the house price bubble as a potential problem. But he failed to understand the nature of the problem. With typical "Econ 101" thinking, he predicted that house prices would fall, and that "spending will suddenly drop off as both the bond market and the housing market experience rude awakenings". He consistently failed to understand that the subprime crisis was financial in nature. Its macroeconomic effects would be huge not because of a reverse wealth effect as falling house prices depressed consumption, but because rising defaults on subprime mortgages would drastically affect all the structured financial products that used them directly or indirectly as collateral, and that the highly leveraged holders of such products - banks, in particular - would find themselves first illiquid and then insolvent.
Back on the eve of the crisis, Krugman also blew hot and cold on inflation, which I suppose is one way of being "right about everything". In April 2005, for example, he detected "A Whiff of Stagflation", but by June the following year he had changed his tune. Now, inflation was a "phantom menace" and - perversely given his concerns about the housing bubble - he worried that the Fed was wrong to tighten monetary policy. A general rule of Krugman's journalism is that monetary policy is never too loose and central bankers are always tightening too early. Another rule is that if someone erroneously anticipates higher inflation, that person is a member of the Always-Wrong Club. Who knew that he himself founded that club in 2005?
By August 2006, Krugman's diagnosis of the risk of a coming recession was correct in only one respect: that he thought there was a risk. In every other respect he was wrong:
The forces that caused a recession five years ago never went away. Business spending hasn't really recovered from the slump it went into after the technology bubble burst: nonresidential investment as a share of GDP, though up a bit from its low point, is still far below its levels in the late 1990's. Also, the trade deficit has doubled since 2000, diverting a lot of demand away from goods produced in the United States. ... [But] now, for the first time, problems in the housing market are starting to seriously reduce economic growth: the latest GDP data show real residential investment falling at an accelerating pace. ... based on what we know now, there's an economic slowdown coming. This slowdown might not be sharp enough to be formally declared a recession.
Once again, he was looking at the wrong dials on the dashboard, completely missing the financial implications of falling house prices.
Later that same month, an increasingly nervous Krugman gave an unexpected but qualified endorsement to Nouriel Roubini of NYU, "the only well-known economist flatly predicting a housing-led recession in the coming year. ... While I don't share Mr. Roubini's certainty, I see his point." Krugman's first reference to the role of "irresponsible bank lending" - and the Fed's failure to "crack down" on the banks - came on October 30, 2006. Note, however, that as late as December 1, 2006, he was still giving "roughly even odds that we're about to experience a formal recession". And when Krugman tried to imagine how the crisis would play out, with a piece published in March 2007 under the dateline "Feb. 27, 2008", his predictions were laughable. According to the man who would come to be known as the Invincible Krugtron:
1. The crisis would begin with a stock market crash ... in China.
2. Then it would spread to the junk bond market.
3. Then would come the defaults on subprime mortgages.
4. But the crisis would only be really big if "large market players, hedge funds in particular, [had] taken on so much leverage - borrowing to buy risky assets - that the falling prices of those assets would set off a chain reaction of defaults and bankruptcies".
Still capable of caution in those days, Krugman concluded the column by wimping out. "I'm not saying that things will actually play out this way," he wrote. "But if we're going to have a crisis, here's how." As we all know, the crisis still hasn't come to China (though the Shanghai stock market fell, the macro consequences were minimal), junk bonds were not crucial, and it was the losses of banks, not hedge funds, that did the most serious damage.
As late as January 2008 - again looking myopically at trade data - he was arguing that the U.S. economy had "dodged a bullet ... which is why I haven't been as sure about a looming recession as, say, Larry Summers or Marty Feldstein, let alone Nouriel Roubini ... I'm actually uncertain about where things go this year." Nor was he "nearly as pessimistic" as Carmen Reinhart and Kenneth Rogoff, whose seminal paper on the likely huge macroeconomic impact of the subprime crisis appeared that February. It was only in March that Krugman finally grasped the financial nature of the crisis. "A lot of people", he wrote, "saw that there was a huge housing bubble":
What's going on now, however, is beyond that: the "financial accelerator," with deleveraging causing a credit crunch that forces further deleveraging, and now threatens to produce a sort of pancake collapse of the whole system, was not, I think, so widely foreseen. I don't think many people saw how much the system itself would break down.
No, not many economists did see the complex interrelationships between the subprime mortgage market, collateralized debt obligations, highly leveraged banks and international derivatives markets that were the key to the scale of the crisis and its macroeconomic impact. But at least one historian did here and here and here and here and here. And here. I don't claim to be always right. But always wrong?
One might have expected a little more humility from an economist who so clearly failed to understand the nature of the biggest financial crisis of his lifetime until after it had happened. Or at least a little less egomania: "Yes," he wrote in January, "I've heard about the notion that I should be Treasury Secretary. I'm flattered, but it really is a bad idea." Gee, Professor Krugman, why do you say that?
It would mean taking me out of a quasi-official job that I believe I'm good at and putting me into one I'd be bad at. ... An op-ed columnist at the [New York] Times ... [can] have a lot more influence on national debate than, say, most senators. Does anyone doubt that the White House pays attention to what I write? ... By my reckoning ... an administration job, no matter how senior, would actually reduce my influence.
Not to mention smugness:
Obviously I'm plenty combative, and in a way still ambitious too; I do track my Twitter followers, wonder how each column will do on the most-emailed list, and all that. But there are no promotions I'm seeking, no honors I desperately desire that I don't already have. ... I'm wonderfully relaxed: no more steps to climb, no more boxes to check. I just do what I feel I should, and try to have some fun along the way. I'm a very lucky guy.
I confess I am at a loss to understand the basis for this self-satisfaction. If Krugman was wrong about the origins of the crisis, and wrong about the fate of the euro - wrong, in short, about the two biggest crises of our time - what exactly was he right about? What, besides the doubtless large number of his Twitter followers, entitles him to be so pleased with himself?
That is a question I shall answer tomorrow, in the third and final part of this series.
It's an ill wind that blows no one any good. The financial crisis that came to a head five years ago with the failure of Lehman Brothers has been especially beneficial to the economist Paul Krugman. In his widely read New York Times column and blog, Krugman regularly boasts that he has been "right" about the crisis and its consequences. "I (and those of like mind)," he wrote in June last year, "have been right about everything." Those who dare to disagree with him -- myself included -- he denounces as members of the "Always-Wrong Club." Readers of his blog have just been treated to another such sneer.
"Maybe I actually am right," Krugman wrote back in April, "and maybe the other side actually does contain a remarkable number of knaves and fools. ... Look at the results: again and again, people on the opposite side prove to have used bad logic, bad data, the wrong historical analogies, or all of the above. I'm Krugtron the Invincible!" That last allusion is to the 1980s science fiction superhero, Voltron. The resemblance between Krugman and Voltron was suggested by one of the gaggle of bloggers who are to Krugman what Egyptian plovers are to crocodiles. Yesterday one of these thought, wrongly, that he had caught me out. Unwisely, the crocodile snapped its jaws shut.
As a Princeton professor and Nobel Prize winner, Krugman is indeed widely believed to be intellectually invincible. He himself acknowledges having made only two mistakes, both predating the crisis: the impact of information technology on productivity, which he underestimated, and the significance of the federal deficits of the Bush administration, which he overestimated. "In the Great Recession and aftermath, however, I went with [my] models -- and they worked!"
"Let those who are without error cast the first stone," Krugman wrote back in 2010. Unfortunately, this is not an injunction he himself has heeded. Repeatedly, over the last five years, he has heaped opprobrium on others. His latest performance is characteristic; perhaps not quite intentionally he even refers to "my own unpleasantness with Ferguson".
Let us leave -- for the moment -- the question of the future size of the federal debt, which I have dealt with elsewhere and shall return to in a subsequent article. My purpose here is simply to challenge Krugman's right to behave in this way. Even if he were nearly always right, there would be no justification for his lack of civility. But he is not nearly always right. There is therefore no justification for his unshakeable certainty either.
Krugman reserves a special contempt for people who, in his words, "take a position and refuse to alter that position no matter how strongly the evidence refutes it, who continue to insist that they have The Truth despite being wrong again and again." He calls this "derping." The awkward thing for Krugman is that "being wrong again and again" perfectly characterizes his own commentary on what proved to be one of the crucial issues of the financial crisis: whether or not Europe's monetary union would survive it.
To begin with, Krugman was blithely confident that Europe would weather the economic storm better than the United States. On January 11, 2008, he hailed it as "The Comeback Continent":
... Since 2000, employment has actually grown a bit faster in Europe than in the United States ... If you think Europe is a place where lots of able-bodied adults just sit at home collecting welfare checks, think again. ... Europe's economy looks a lot better now - both in absolute terms and compared with our economy - than it did a decade ago.
Krugman explained Europe's comeback in terms of "deregulation", a more competitive broadband market than the U.S., "strong social safety nets" and "very high taxes." On May 19, 2008, after a visit to Berlin, he even told his faithful readers: "I have seen the future, and it works ... in the heart of 'old Europe'." (Admittedly this column was a standard "peak oil" piece, exhorting Americans to have German-style cars and public transport, as opposed to, say, developing new technology to unlock hitherto inaccessible domestic supplies of oil and natural gas.)
Finally, in December 2008, Krugman woke up to the fact that the "Comeback Continent" was in fact an "economic mess." But what kind of mess? No, not the mess of excessively leveraged and effectively insolvent banks that had maxed out on CDOs, bubbly real estate and Club Med government bonds. The mess Krugman discerned was the failure of the German government to see "the need for a large, pan-European fiscal stimulus." The main thing, he wrote in March 2009, was not to make the mistake of thinking that "big welfare states are ... the cause of Europe's current crisis. In fact ... they're actually a mitigating factor." It was a theme he returned to when he and I debated the crisis in New York three months later, when he argued that "the human suffering [was] going to be much greater on this side of the Atlantic" because of Europe's "strong social safety net." Even in January 2010 he was still insisting that:
The real lesson from Europe is actually the opposite of what conservatives claim: Europe is an economic success, and that success shows that social democracy works. ... taking the longer view, the European economy works; it grows; it's as dynamic, all in all, as our own.
All of this sheds (to say the least) interesting light on Krugman's boast, in an interview in March of this year, that he had been one of the few commentators who "predicted the unfolding economic disaster in Europe." This is by no means the only retrospective prediction Krugman has ever made, but it is surely the most shameless.
The European crisis had in fact begun in December 2009, while Krugman was still celebrating Europe's economic success, when the newly elected Socialist government in Greece revealed the full extent of the country's fiscal crisis. The Invincible Krugtron was on the scene in a flash -- well, two months later: "Lack of fiscal discipline isn't the whole, or even the main, source of Europe's troubles -- not even in Greece. ... The real story behind the euromess lies not in the profligacy of politicians but in the arrogance of elites ..."
Wait, what about the Comeback Continent, where social democracy was the future that worked? Never mind about that, there's a crisis and Krugtron's help is urgently needed! And boy did he help.
The question of whether the euro was going to blow up imminently was surely the biggest call of the last few years. Fear of another Lehman-style shock froze credit markets and paralyzed policymakers. Was this just an outside risk over the long term, or a disaster that was almost upon us? Faithful readers of Krugman's New York Times column knew the answer.
By my reckoning, Krugman wrote about the imminent break-up of the euro at least eleven times between April 2010 and July 2012:
1. April 29, 2010: "Is the euro itself in danger? In a word, yes. If European leaders don't start acting much more forcefully, providing Greece with enough help to avoid the worst, a chain reaction that starts with a Greek default and ends up wreaking much wider havoc looks all too possible."
2. May 6, 2010: "Many observers now expect the Greek tragedy to end in default; I'm increasingly convinced that they're too optimistic, that default will be accompanied or followed by departure from the euro."
3. September 11, 2011: "the euro is now at risk of collapse. ... the common European currency itself is under existential threat."
4. October 23, 2011: "[the] monetary system ... has turned into a deadly trap. ... it's looking more and more as if the euro system is doomed."
5. November 10, 2011: "This is the way the euro ends ... Not long ago, European leaders were insisting that Greece could and should stay on the euro while paying its debts in full. Now, with Italy falling off a cliff, it's hard to see how the euro can survive at all."
6. March 11, 2012: "Greece and Ireland ... had and have no good alternatives short of leaving the euro, an extreme step that, realistically, their leaders cannot take until all other options have failed - a state of affairs that, if you ask me, Greece is rapidly approaching."
7. April 15, 2012: "What is the alternative? ... Exit from the euro, and restoration of national currencies. You may say that this is inconceivable, and it would indeed be a hugely disruptive event both economically and politically. But continuing on the present course, imposing ever-harsher austerity on countries that are already suffering Depression-era unemployment, is what's truly inconceivable."
8. May 6, 2012: "One answer - an answer that makes more sense than almost anyone in Europe is willing to admit - would be to break up the euro, Europe's common currency. Europe wouldn't be in this fix if Greece still had its drachma, Spain its peseta, Ireland its punt, and so on, because Greece and Spain would have what they now lack: a quick way to restore cost-competitiveness and boost exports, namely devaluation."
9. May 17, 2012: "Apocalypse Fairly Soon ... Suddenly, it has become easy to see how the euro - that grand, flawed experiment in monetary union without political union - could come apart at the seams. We're not talking about a distant prospect, either. Things could fall apart with stunning speed, in a matter of months."
10. June 10, 2012: "utter catastrophe may be just around the corner."
11. July 29, 2012: "Will the euro really be saved? That remains very much in doubt."
His most recent wrong call was that it was "a real possibility" that Cyprus would be "forced off the euro in the next few days." That was in March of this year -- shortly before a new Cypriot government reached an agreement for yet another bailout that kept it in the Eurozone.
True, Krugman was rarely unequivocal in predicting a euro breakup. Especially at the beginning of the crisis, he hedged, sometimes assigning "more or less even odds" to "a breakup of the euro, with major players, not just Greece, being forced out." That was in an interview with Playboy (seriously) back in February 2012. By May, however, he was more certain. While conceding to the Washington Post (presumably in jest) that his view of the euro's survival "depends on my mood," he stated: "As a matter of substantive economics? It's doomed." His confidence growing, he told the Belgian paper De Tijd: "I think Greece is too far gone. I don't see a realistic possibility of making the euro work for them now." Radio Free Europe listeners were told that same month that a Greek exit was "probably something that will take place in months." "Mr. Krugman, does Greece have to leave the euro zone?" he was asked by Der Spiegel. "Yes," he replied. "I don't see too much alternative now." "I don't think they can save Greece," he told the Financial Times.
By now the Invincible Krugtron was on a roll. "Something has to happen and in the end it does have to be a Greek exit," he told a reporter from the Independent. "I'd be astonished if they can go more than two years without leaving. I'd be astonished if they could go even one year." Viewers of the BBC were next:
Krugman: I believe Greece will and must leave the euro. I think there is no alternative.
BBC: When do you think it will happen?
Krugman: It could happen in a few weeks in the next election of Greece [which was on June 17].
On more than one occasion -- on PBS in June of last year, for example, and in an interview the following month with Business Insider -- the formulation was conditional: if the Germans did not tolerate higher inflation, "then the euro will break up." But by September it was back to inevitable Greek exit: "I cannot see how this country can remain in the euro," he told L'Express. "It is practically impossible."
In all, I count a total of twenty-two statements of this sort, attaching probabilities of 50 percent and above to the scenario of one or more countries leaving the euro.
Now, I happen to be rather a euro-skeptic myself. I opposed the creation of the euro and predicted at the outset that the experiment of monetary union without fiscal integration would ultimately degenerate. But today, as you may have noticed, the euro is still intact. Indeed, the Eurozone has two more members than when the crisis began and in January will acquire yet another, Latvia. That is not to say that it won't fall apart eventually. But for the foreseeable future that remains a much lower probability scenario than its survival. I don't know which particular model Paul Krugman was using in the summer of 2012, but it certainly did rather a bad job of predicting what would happen. I laughed out loud at his recent lame excuse that his model couldn't have been expected to predict the action of the European Central Bank. What an awesome model: one that predicts everything about a monetary union except the action of the monetary authority.
Besides its wrongness, the other striking feature of Krugman's commentary on the euro is the vitriol he has directed against those struggling to cope with the crisis. In December 2011, he called the then Italian Prime Minister Mario Monti "delusional." In March of this year, incredibly, he appeared to liken the Finnish Vice President of the European Commission, Olli Rehn, to a cockroach. Some people, I have come to realize, are intimidated by this lack of civility. But I am with Dilbert. It's simply absurd for this man to accuse others of "derping," a childish neologism meaning -- in case you've forgotten -- to "take a position and refuse to alter that position ... despite being wrong again and again."
"I like to think," Krugman wrote on August 14, "that if I had been proved ... utterly wrong ... I'd have had the strength of character to admit it and question my premises. But I don't know for sure, and with some luck I'll never find out." Now that I have shown Krugtron the Invincible to have been utterly and repeatedly wrong about the euro, I look forward to reading his admission of error.
To be precise, I would like to see him admit that he got the biggest call of the last several years dead wrong, again and again and again. Not only should he admit his mistake, but he should also apologize to the millions of people who have suffered as a result of it. Or does he believe that his numerous, widely read predictions of imminent currency break-up had no impact whatever on the expectations of European investors and consumers?
Niall Ferguson's latest book is The Great Degeneration (Penguin Press).