The union is saved. Alex Salmond, Scotland’s nationalist first minister, has resigned. All the ink spilled on the benefits and costs of an independent Scotland can be consigned to counterfactual history. The only pressing question is the significance – and consequences – of the No vote.
Most commentary has focused on UK politics. This is too parochial. The real significance of the No vote lies at the European level. The result dents the hopes of other separatist movements in Spain, Italy and Belgium. The less obvious point is that we have witnessed another defeat for populism at the hands of the emergent Europe-wide grand coalition.
The Yes campaign was more than just the Scottish National party. Its unexpected late gains in the polls reflected the mobilisation of young voters and previous non-voters, especially in the underclass of Glasgow and its environs. What attracted those people was not the rather intricate proposition of political independence plus monetary union. It was an emotional appeal, a matter of saltire flags and “wha’s like us?” rhetoric.
Populism has been popping up all over Europe since the financial crisis. England’s version is the United Kingdom independence party. Last Sunday, the anti-immigrant Sweden Democrats doubled their share in the national parliament, while the anti-European Alternative for Germany party won seats in two more regional parliaments. In France, opinion polls suggest that the National Front’s Marine Le Pen has a serious shot at the presidency in 2017. The Dutch have Geert Wilders. Greece has Golden Dawn.
What all these different populists have in common is nationalism – along with a rather fishy admiration for Vladimir Putin, the Russian President, and a model for the chauvinism-plus-authoritarianism combination that is the essence of populism in power.
With most of the continent still struggling to return to growth and the true character of Russian power in full view in Ukraine, populism is a problem Europe really could do without. To prevent the kind of irreversible mistake that Scotland has avoided, mainstream parties across the EU need to join forces.
In Britain, David Cameron, prime minister, has been under pressure to address the challenge posed by Ukip by moving to the right. Yet it was his joint effort with his predecessor as prime minister, Gordon Brown, that halted the SNP. The union was saved by the governing coalition acting in a temporary alliance with Labour.
Bipartisanship is something Americans believe in but do not practise. In Europe the opposite applies: coalitions across the left-right divide are unloved but increasingly ubiquitous. No fewer than 25 of the EU’s 28 member states are ruled by coalition governments.
The trend is even more evident in Brussels. Witness the recent distribution of the EU’s top jobs: the president-elect of the European Commission hails from the conservative EPP, while the president of the European Parliament is from the socialist PES. The proposed new commission includes representatives from all four of the mainstream European parties. The populists are out.
Grand coalitions used to be viewed as temporary expedients. When Germany’s Christian Democrats and Social Democrats joined forces for the first time in 1966, commentators feared it would lead to political instability. In fact, grand coalitions have turned out to bring stability. Would Germany, for instance, be better off with the alternative coalition of Social Democrats, Greens and the ex-communist Linke – which last year proposed to raise the top rate of income tax to 100 per cent?
In 2012, in the depths of the eurozone crisis, Greek voters were twice called to the ballot to decide between two mainstream parties and a multitude of populists ranging from neo-Nazis to Communists. When no party emerged with an outright majority, the two mainstream parties put aside four decades of animosity and formed a coalition under Antonis Samaras. This decision surely averted what would have been a disastrous exit from Europe’s monetary union.
In France, too, the prospect of a National Front victory in the 2017 presidential elections will ultimately force another pacte républicain: a commitment by the parties of both the centre-left and the centre-right to join forces against Ms Le Pen, regardless of which candidate ends up against her in the second round.
Populism is back; it is not about to go away. The wrong response is for mainstream parties to pander to the populists. The right response is for the centrists to join forces, hard though it is to bury their ancestral rivalries. I have long been identified with conservatism, though on many issues I am in fact a liberal. The advent of a new era of grand coalitions is good news for me. From now on, I no longer need to deny my allegiance to the extreme centre.
Fritz Lang’s silent movie classic Metropolis (1927) depicts the downfall of a hierarchical megacity. Metropolis is a city of skyscrapers. At the top, in their penthouse C-suites, lives a wealthy elite led by the autocrat Joh Fredersen. Down below, in subterranean factories, the proletariat toils. After he witnesses an industrial accident, Fredersen’s playboy son is awakened to the squalor and danger of working-class life. The upshot is a violent revolution and a self-inflicted if inadvertent disaster: When the workers smash the power generators, their own living quarters are flooded because the water pumps fail. Today, Metropolis is perhaps best remembered for the iconic female robot that becomes the doppelgänger of the heroine, Maria. Yet it is better understood as a metaphor for history’s fundamental dialectic between hierarchies and networks.
Lang said the film was inspired by his first visit to New York. To his eyes, the skyscrapers of Manhattan were the perfect architectural expression of a hierarchical and unequal society. Contemporaries, notably the right-wing media magnate Alfred Hugenberg, detected a communist subtext, though Lang’s wife, who co-wrote the screenplay, was a radical German nationalist who later joined the Nazi Party. Viewed today, the film transcends the political ideologies of the mid-20th century. With its multiple religious allusions, culminating in an act of redemption, Metropolis is modernity mythologized. The central question it poses is as relevant today as it was then: How can an urbanized, technologically advanced society avoid disaster when its social consequences are profoundly anti-egalitarian?
There is, perhaps, an even more profound question in the subtext of Lang’s film: Who wins, the hierarchy or the network? The greatest threat to the hierarchical social order of Metropolis is posed not by flooding but by a clandestine conspiracy among the workers. Nothing infuriates Fredersen more than the realization that this conspiracy was hatched in the catacombs beneath the city without his knowledge.
In today’s terms, the hierarchy is not a single city but the state itself, the vertically structured super-polity that evolved out of the republics and monarchies of early modern Europe. Though not the most populous nation in the world, the United States is certainly the world’s most powerful state, despite the limits imposed by checks (to lobbyists) and balances (as in bank). Its nearest rival, the People’s Republic of China, is usually seen as a profoundly different kind of state, for while the United States has two major parties and a gaggle of tiny ones, the People’s Republic has one and only one. American government is founded on the separation of powers, not least the independence of its judiciary; the PRC subordinates law, such as it has evolved in China over the centuries, to the dictates of the Communist Party.
Yet both states are republics, with roughly comparable vertical structures of administration and not wholly dissimilar concentrations of power in the hands of the central government. Economically, the two systems are certainly converging, with China looking ever more to market signals and incentives, while the United States keeps increasing the statutory and regulatory power of government over producers and consumers. And, to an extent that disturbs civil libertarians on both Left and Right, the U.S. government exerts control and practices surveillance over its citizens in ways that are functionally closer to contemporary China than to the America of the Founding Fathers.
To all the world’s states, democratic and undemocratic alike, the new informational, commercial, and social networks of the internet age pose a profound challenge, the scale of which is only gradually becoming apparent. First, email achieved a dramatic improvement in the ability of ordinary citizens to communicate with one another. Then the internet came to have an even greater impact on the ability of citizens to access information. The emergence of search engines marked a quantum leap in this process. The advent of laptops, smartphones, and other portable devices then emancipated electronic communication from the desktop. With the explosive growth of social networks came another great leap, this time in the ability of citizens to share information and ideas.
It was not immediately obvious how big a challenge all this posed to the established state. There was a great deal of cheerful talk about the ways in which the information technology revolution would promote “smart” or “joined-up” government, enhancing the state’s ability to interact with citizens. However, the efforts of Anonymous, Wikileaks and Edward Snowden to disrupt the system of official secrecy, directed mainly against the U.S. government, have changed everything. In particular, Snowden’s revelations have exposed the extent to which Washington was seeking to establish a parasitical relationship with the key firms that operate the various electronic networks, acquiring not only metadata but sometimes also the actual content of vast numbers of phone calls and messages. Techniques of big-data mining, developed initially for commercial purposes, have been adapted to the needs of the National Security Agency.
The most recent, and perhaps most important, network challenge to hierarchy comes with the advent of virtual currencies and payment systems like Bitcoin. Since ancient times, states have reaped considerable benefits from monopolizing or at least regulating the money created within their borders. It remains to be seen how big a challenge Bitcoin poses to the system of national fiat currencies that has evolved since the 1970s and, in particular, how big a challenge it poses to the “exorbitant privilege” enjoyed by the United States as the issuer of the world’s dominant reserve (and transaction) currency. But it would be unwise to assume, as some do, that it poses no challenge at all.
Clashes between hierarchies and networks are not new in history; on the contrary, there is a sense in which they are history. Indeed, the course of history can be thought of as the net result of human interactions along four axes.
The first of these is time. The arrow of time can move in only one direction, even if we have become increasingly sophisticated in our conceptualization and measurement of its flight. The second is nature: Nature means in this context the material or environmental constraints over which we still have little control, notably the laws of physics, the geography and geology of the planet, its climate and weather, the incidence of disease, our own evolution as a species, our fertility, and the bell curves of our abilities as individuals in a series of normal distributions. The third is networks. Networks are the spontaneously self-organizing, horizontal structures we form, beginning with knowledge and the various “memes” and representations we use to communicate it. These include the patterns of migration and miscegenation that have distributed our species and its DNA across the world’s surface; the markets through which we exchange goods and services; the clubs we form, as well as the myriad cults, movements, and crazes we periodically produce with minimal premeditation and leadership. And the fourth is hierarchies, vertical organizations characterized by centralized and top-down command, control, and communication. These begin with family-based clans and tribes, out of which or against which more complex hierarchical institutions evolved. They include, too, tightly regulated urban polities reliant on commerce or bigger, mostly monarchical, states based on agriculture; the centrally run cults often referred to as churches; the armies and bureaucracies within states; the autonomous corporations that, from the early modern period, sought to exploit economies of scope and scale by internalizing certain market transactions; academic corporations like universities; political parties; and the supersized transnational states that used to be called empires.
Note that the environment is not wholly a given; it can be shaped by, as well as shape, humanity. It may well be that, in the foreseeable future, our species’ impact on the earth’s climate will become the dominant driver of history, but that is not yet the case. For now, the interactions of networks and hierarchies are more important. Networks are not planned by a single authority; they are the main source of innovation but are relatively fragile. Hierarchies exist primarily because of economies of scale and scope, beginning with the imperative of self-defense. To that end, but for other reasons too, hierarchies seek to exploit the positive externalities of networks. States need networks, for no political hierarchy, no matter how powerful, can plan all the clever things that networks spontaneously generate. But if the hierarchy comes to control the networks so much as to compromise their benign self-organizing capacities, then innovation is bound to wane.
Consider some examples of history along these four axes. The population of the entire Eurasian landmass was devastated by the Black Death of the 14th century, a natural disaster transmitted along trade networks. But the impact was very different in Europe compared with Asia. The main difference between the West and the East of Eurasia after 1500 was that networks in the West were much freer from hierarchical dominance than in the East. No monolithic empire rose in the West; multiple and often weak principalities prevailed. Printing existed in China long before the 15th century, but its advent in Germany was explosive because of the network effects generated by the rapid spread of Gutenberg’s easily replicated technology. The Reformation, which was printed as much as it was preached, unleashed a wave of religious revolt against the hierarchy of the Roman Catholic Church. It was only after prolonged and bloody conflict that the monarchies were able to re-impose their hierarchical control over the new Protestant sects.
European history in the 17th, 18th, and 19th centuries was characterized by a succession of network-driven waves of innovation: the Scientific Revolution, the Enlightenment, and the Industrial Revolution. In each case, the sharing of novel ideas within networks of scholars and tinkerers produced powerful and mainly positive externalities, culminating in the decisive improvements in economic efficiency and then life expectancy experienced in the British Isles, Western Europe, and North America from the late 18th century. The network effects of trade and migration were especially powerful, as European merchants and settlers exploited falling transportation costs to export their ideas, as well as their techniques and goods, to the rest of the world. Thanks to those ideas, this was also an era of political revolutions. Ideas about liberty, equality, and fraternity crossed the Atlantic as rapidly as pirated technology from the cotton mills of Lancashire. Kings were toppled, aristocracies abolished, and churches dissolved or made to compete without the support of a state.
Yet the 19th century saw the triumph of hierarchies over the new networks. This was partly because hierarchical corporations—which began, let us remember, as state-sponsored monopolies like the East India Company—were as important in the spread of industrial capitalism as horizontally structured markets. Firms could reduce the transaction costs of the market as well as exploit economies of scale and scope. The railways, steamships, and telegraph cables that made possible the first age of globalization had owners.
The key, however, was the victory of hierarchy in the realm of politics. Why revolutionary ideologies like Jacobinism and Marxism-Leninism so quickly produced highly centralized hierarchical political structures is one of the central puzzles of the modern era, though it was an outcome more or less accurately predicted by much classical political theory. Whatever the democratic aspirations of the revolutionaries, their ideologies ended up as sources of legitimation for autocrats who were markedly more power-hungry than the monarchs of the ancien régime.
True, the energies unleashed by the overthrow of the Bourbons were (just barely) insufficient to overcome those produced by the British synthesis of monarchism and the pursuit of Mammon, which restored or revived the continental monarchies, including, temporarily, the Bourbons themselves. But the old order was only partially restored. Napoleon had taught even his most ardent enemies an unforgettable lesson, as Clausewitz understood, about how an imperial leader could wield power by commanding a people in arms.
For a time it seemed that a modus vivendi had arisen between the new networks of science and industry and the old hierarchies of hereditary rule. Half the world fell under the sway of a dozen Western empires, and much of the rest lay within their economic orbit. But optimists, from Norman Angell to Andrew Carnegie, felt sure that these empires would not be so foolish as to jeopardize the benefits of international exchange. After all, it was partly by taxing the fruits of the first era of globalization that the empires could finance their vast armies, navies, and bureaucracies. This proved wrong. So complete was the imperial system of command, control, and communication that when the empires resolved to go to war with one another over arcane issues like the status of Bosnia-Herzegovina or the neutrality of Belgium, they were able to mobilize in excess of seventy million men as soldiers or sailors. In France and Germany about a fifth of the prewar population ended up in uniform, bearing arms.
The triumph of hierarchy over networks was symbolized by the complete failure of the Second International of socialist parties to prevent the World War. When the leaders of European socialism met in Brussels at the end of July 1914, they could do little more than admit their own impotence. What the Viennese satirist Karl Kraus called the alliance of “thrones and telephones” had marched the young men of Europe off to Armageddon. Those who thought the war would not last long underestimated the hierarchical state’s ability to sustain industrialized slaughter.
The mid-20th century was the zenith of hierarchy. Although World War I ended with the collapse of no fewer than four of the great dynastic empires—the Romanov, Habsburg, Hohenzollern, and Ottoman—they were replaced with astonishing swiftness by new and stronger states based on the normative paradigm of the nation-state, the ethno-linguistically defined anti-imperium.
Not only did the period after 1918 witness the rise of the most centrally controlled states of all time (Stalin’s Soviet Union, Hitler’s Third Reich and Mao’s People’s Republic); it was also an era in which hierarchies flourished in the economic, social and cultural spheres. Central planners ruled, whether they worked for governments, armies or large corporations. In Aldous Huxley’s Brave New World (1932), the Fordist World State controls everything from eugenics to narcotics and euthanasia; the fate of the non-conformist Bernard Marx is banishment. In Orwell’s Nineteen Eighty-Four (1949) there is not the slightest chance that Winston Smith will be able to challenge Big Brother’s rule over Airstrip One; his fate is to be tortured and brainwashed. A remarkable number of the literary heroes of the high Cold War era were crushed by one system or the other: from Heller’s John Yossarian to le Carré’s Alec Leamas to Solzhenitsyn’s Ivan Denisovich.
Kraus was right: The information technology of mid-century overwhelmingly favored the hierarchies. Though the telegraph and telephone created vast new networks, they were relatively easy to cut, tap, or control. Newsprint, radio, cinema, and television were not true network technologies because they generally involved one-way communication from the content provider to the reader or viewer. During the Cold War the superpowers were mostly able to control information flows by manufacturing or sponsoring propaganda and classifying or censoring anything deemed harmful. Sensation surrounded every spy scandal and defection; yet in most cases all that happened was that classified information was passed from one national security state to the other. Only highly trained personnel in governmental, academic, or corporate research centers used computers, and those were anything but personal computers. The self-confidence of the technocrats at that time is nicely exemplified by MONIAC (the Monetary National Income Analogue Computer), a hydraulic device designed by Bill Phillips (of Phillips Curve fame) that was supposed to simulate the effects of Keynesian economic policy on the UK economy.
There were moments of truth, particularly in the 1970s, when classified information reached the public through the free press in the West or through samizdat literature in the Soviet bloc. Yet the striking feature of the later Cold War was how well the national security state managed to withstand exposures like the report of the Church Committee or the publication of the Gulag Archipelago. George H.W. Bush, appointed head of the Central Intelligence Agency in 1976—in the midst of the Church Committee’s work—went on to serve as Vice President and President. Within a decade of the collapse of the Soviet Union, the Russian Federation had a former KGB operative as its President. The Pentagon proved to be mightier than the Pentagon Papers.
Today, by contrast, the hierarchies seem to be in much more trouble. The most obvious challenge to established hierarchies is the flow of information unleashed by the advent of the personal computer, email, and the internet, which have allowed ordinary citizens to organize themselves into much larger and more dispersed networks than has ever been possible before. The PC has empowered the individual the way the book did after the 15th-century breakthrough in printing. Indeed, the trajectories for the production and price of PCs in the United States between 1977 and 2004 are remarkably similar to the trajectories for the production and price of printed books in England from 1490 to 1630. The differences are that our networking revolution is much faster and that it is global.
In a far shorter space of time than it took for 84 percent of the world’s adults to become literate, a remarkably large proportion of humanity has gained access to the internet. Although its origins can be traced back to the late 1960s, the internet as a system of interconnected computer networks did not really begin until the standard protocol suite (TCP/IP) was adopted at universities in the 1980s. As recently as 1998 only around 2 percent of the world’s population were internet users. Today the proportion is 39 percent; in the developed world, 77 percent.
Google was incorporated in 1998. Its first premises were a garage in Menlo Park. Today it has the capacity to process more than a billion search requests and 24 petabytes of user-generated data every day. Facebook was founded at Harvard ten years ago. Today it has 1.23 billion regular users a month. Twitter was created eight years ago. Now it has 200 million users, who send more than 400 million tweets daily.
The challenge these new networks pose to established hierarchies is threefold. First, they vastly increase the volume of information to which citizens can have access, as well as the speed with which they can have access to it. Second, they empower individual citizens to publicize things that might otherwise remain secret or known only to a few. Edward Snowden and Daniel Ellsberg did the same thing by making public classified documents, but Snowden has already revealed much more than Ellsberg and to vastly more people, while Julian Assange, the founder of WikiLeaks, has far out-scooped Carl Bernstein and Bob Woodward (even if he has not yet helped to bring down an American President). Third, and perhaps most importantly, the networks expose by their very performance the inefficiency of hierarchical government.
Politicians and voters remain the captives of a postwar campaign vocabulary in which the former pledge to the latter that they will provide not just additional public goods but also “create jobs” without significantly increasing the cost to most voters in terms of taxation. The history of President Barack Obama’s Administration can be told as a series of pledges to increase employment (“the stimulus”), reduce the risk of financial crisis, and provide universal health insurance. The President’s popularity has declined fastest when, as with the Patient Protection and Affordable Care Act, the inability of the Federal government to fulfill these pledges efficiently has been most exposed. The shortcomings of the website Healthcare.gov in many ways epitomized the fundamental problem: In the age of Amazon, consumers expect basic functionality from websites. Daily Show host Jon Stewart spoke for hundreds of thousands of frustrated users when he taunted former Health and Human Services head Kathleen Sebelius: “I’m going to try and download every movie ever made, and you’re going to try to sign up for Obamacare, and we’ll see which happens first.”
Yet the trials and tribulations of “Obamacare” are merely a microcosm of a much more profound problem. The modern state, at least in its democratic variant, has evolved a familiar solution to the problem of providing more public goods without proportionate increases in taxation: finance current government consumption through borrowing, while encouraging citizens to increase their own leverage through fiscal incentives such as the deductibility of mortgage interest payments. The vast increase of private debt that preceded the financial crisis of 2008 was succeeded by a comparably vast increase in public debt. At the same time, central banks took increasingly unorthodox steps to shore up tottering banks and plunging asset markets by purchases of securities in exchange for excess reserves. With short-term interest rates at zero, “quantitative easing” was designed to keep long-term interest rates low too. The financial world watches with bated breath to see how QE can be “tapered” and when short-term rates will be raised. Most economists nevertheless take for granted the U.S. government’s ability to print its own currency without limit. Many assume that this offers some relatively easy way out of trouble if rising interest rates threaten to make debt service intolerably burdensome. But this assumption may be wrong.
Since ancient times, states have exploited their ability to issue currency, whether coins stamped with the king’s likeness or electronic dollars on a screen. But if the new networks are in the process of creating an alternative form of money, such as Bitcoin purports to be, then perhaps the time-honored state privilege to debase the currency is at risk. Bitcoin offers many advantages over a fiat currency like the U.S. dollar. As a means of payment—especially for online transactions—it is faster, cheaper, and more secure than a credit card. As a store of value it has many of the key attributes of gold, notably finite supply. As a unit of account it is having teething troubles, but that is because it has become an attractive speculative object. It is too early to predict that Bitcoin will succeed as a parallel currency, but it is also too early to predict that it will fail. In any case, governments can fail, too.
Where governments fail most egregiously, new networks may well increase the probability of successful revolution. The revolutionary events that swept the Middle East and North Africa beginning in Tunisia in December 2010—the so-called Arab Spring—were certainly facilitated by various kinds of information technology, even if for most Arabs it was probably the television channel Al Jazeera more than Facebook or Twitter that spread the news of the revolution. Most recently, the revolutionaries in Kiev who overthrew Ukrainian President Viktor Yanukovych made effective use of social networks to organize their protests in the Maidan and to disseminate their critique of Yanukovych and his cronies.
Yet it would be naive to assume that we are witnessing the dawn of a new era of free and equal netizens, all empowered by technology to speak truth to (and about) power, just as it would be naive to assume that the hierarchical state is doomed, if not to revolutionary downfall then at least to a permanent diminution of its capacity for social control.
Modern networks have prospered, paradoxically, in ways that are profoundly inegalitarian. That is because ownership of the information infrastructure and the rents from it are so concentrated. Google at the time of writing is worth $359 billion by market capitalization. About 16 percent of its shares, worth $58 billion, are owned by its founders, Larry Page and Sergey Brin. The market capitalization of Facebook is $161 billion; 28 percent of the shares, worth $45 billion, are owned by its founder Mark Zuckerberg. If Thomas Piketty needs further proof of his thesis that the world is reverting to the inequality of a century ago because, absent world wars and revolutions, the rate of return on capital (and the rate of growth of executive compensation) tends to outstrip the rate of growth of aggregate income, it is there in abundance in Silicon Valley. Granted, the young and very wealthy people who literally own the modern networks tend to have somewhat liberal political views. A few of them are libertarians. But few of them would welcome Gallic rates of taxation, much less a French-style egalitarian revolution.
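The concentration figures above can be checked with simple arithmetic (a minimal sketch; the market capitalizations and founder stakes are the figures quoted in the text at the time of writing, not live data):

```python
# Back-of-the-envelope check of the founder-stake arithmetic quoted above.
# All figures are the essay's own numbers (market cap in $bn, founder stake
# as a fraction, quoted founder holding in $bn), not live market data.
quoted = {
    "Google (Page & Brin)": (359, 0.16, 58),
    "Facebook (Zuckerberg)": (161, 0.28, 45),
}

for name, (cap, stake, holding) in quoted.items():
    implied = cap * stake  # value of the founders' stake implied by the cap
    print(f"{name}: {stake:.0%} of ${cap}bn ≈ ${implied:.0f}bn (quoted: ${holding}bn)")
```

The implied values match the quoted holdings to within a billion dollars or so; the small discrepancy for Google simply reflects rounding of the stake percentage.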
At the same time, the hierarchical state has not been slow to appreciate the opportunities that the new social networks present. Edward Snowden’s most startling revelation was the complicity of companies like Google, Apple, Yahoo, and Facebook in the National Security Agency’s global surveillance programs, notably PRISM. It is all very well for Mark Zuckerberg to complain that he has been “so confused and frustrated by the repeated reports of the behavior of the U.S. government” and to declare self-righteously: “When our engineers work tirelessly to improve security, we imagine we’re protecting you against criminals, not our own government.” But he knows full well that since at least 2009 Facebook has responded to tens of thousands of U.S. government requests for information about Facebook users. If not for Snowden’s leaks, we would not have known just how freely the NSA was making use of the provisions of the Foreign Intelligence Surveillance Act.
The owners of the networks are also well aware that plotting jihad is not the principal use to which their technology is put, any more than plotting revolution is. They owe their security much more to network surfers’ apathy than to the NSA. Most people do not go online to participate in flash mobs. Most women seem to prefer shopping and gossiping; most men prefer sports and pornography. All those neural quirks produced by evolution make us complete suckers for the cascading stimuli of tweets, Instagrams, and Facebook pokes from members of our electronic kinship group. The networks cater to our solipsism (selfies), our short attention spans (140 characters), and our seemingly insatiable appetite for “news” about “celebrities.”
In the networked world, the danger is not popular insurrection but indifference; the political challenge is not to withstand popular anger but to transmit any kind of signal through the noise. What can focus us, albeit briefly, on the tiresome business of how we are governed or, at least, by whom? When we speak of “populism” today, we mean simply a politics that is audible as well as intelligible to the man in the street. Not that the man in the street is actually in the street. Far more likely, he is the man slumped on his sofa, his attention skipping fitfully from television to laptop to tablet to smartphone and back to television. And what gets his attention? The end of history? The clash of civilizations? The answer turns out to be the narcissism of small differences.
Liberals denounce conservatives with astonishing vituperation; Republicans inveigh against Democrats. But to the rest of the world what is striking are the strange things nearly all Americans agree about (for example, that children should be packed off to camps in the summer). Many English people are outraged about immigrant Romanians. But to East Asian eyes the English are scarcely distinguishable from Romanians. (Indeed, in many parts of formerly working-class England people live much as the reviled Roma are alleged to: in squalor.)
It is no accident that most of the world’s conflicts today are not between civilizations, as Samuel Huntington foresaw, but between neighbors. That, after all, is what is really going on in Syria, Iraq, and the Central African Republic, not to mention Ukraine. Can anyone other than a Russian or a Ukrainian tell a Russian and a Ukrainian apart? And yet how readily one is pitted against the other, and how distractingly.
At times, it can seem as if we are condemned to try to understand our own time with conceptual frameworks more than half a century old. Since the financial crisis that began in 2007, many economists have been reduced to recycling the ideas of John Maynard Keynes, who died in 1946. At the same time, analysts of international relations seem to be stuck with terminology that dates from roughly the same period: “realism” or “idealism,” “containment” or “appeasement.” (George Kennan’s “Long Telegram” was dispatched just two months before Keynes’s death.)
Yet our own time is profoundly different from the mid-20th century. The near-autarkic, commanding and controlling states that emerged from the Depression, World War II, and the early Cold War exist only as pale shadows of their former selves. Today, the combination of technological innovation and international economic integration has created entirely new forms of organization—vast, privately owned networks—that were scarcely dreamt of by Keynes and Kennan. We must ask ourselves: Are these new networks really emancipating us from the tyranny of the hierarchical empire-states? Or will the hierarchies ultimately take over the networks as they did a century ago, in 1914, successfully subordinating them to the priorities of the national security state?
A libertarian utopia of free and equal netizens—all networked together, sharing all available data with maximum transparency and minimal privacy settings—has a certain appeal, especially to the young. It is romantic to picture these netizens, like the workers in Lang’s Metropolis, spontaneously rising up against the world’s corrupt hierarchies. Yet the suspicion cannot be dismissed that, despite all the hype of the Information Age and all the brouhaha about Messrs. Snowden and Assange, the old hierarchies and new networks are in the process of reaching a quiet accommodation with one another, much as thrones and telephones did a century ago. We shall all know what it means when (as begins to be imaginable) Sheryl Sandberg leans all the way into the White House. It will mean that Metropolis lives on.
Political backlash usually follows economic crisis. Everyone knows how the Great Depression fuelled support for extremists on both the left and right. Less well known is the way the original Great Depression – the one that began in 1873 and involved a quarter-century of deflation – led to a wave of populism on both sides of the Atlantic. Could this history be repeating itself?
Some 1930s-style fascists are out there, notably in Greece, Hungary and further east. Yet for most Europeans and Americans, fascism is a toxic brand. Far more common are movements that echo the populism of the late-19th century.
Today’s populists are a motley crew of xenophobes, nationalists and cranks – just like their predecessors. Causes dear to 1870s populists ranged from anti-Semitism to bimetallism. Nowadays anti-immigration and euroscepticism are more likely. In America, the financial crisis begat the Tea Party. Europe’s equivalent looks like a populist wave, which many expect to break spectacularly in the upcoming EU parliamentary elections. The real story will be more surprising. Despite the severity of the shocks inflicted on European economies since 2008, most voters will back mainstream parties. Unlike in the 1930s, but as in the 1870s, the centre will hold.
You can see the populists’ media appeal. Compared with the men and women in suits of mainstream European politics, Nigel Farage – the smoking, boozing leader of the UK Independence party – is a newspaper editor’s dream. The same goes for Marine Le Pen, the blonde bombshell of France’s National Front.
As those contrasting examples suggest, there is in fact no such thing as a homogeneous populist movement. When they convene, Ukip MEPs are part of the Europe of Freedom and Democracy group. Then there are the so-called Non-Inscrits – MEPs not attached to any of the recognised party groupings in the European Parliament. These include members of Austria’s Freedom party as well as the Dutch Freedom party.
So what does this motley crew have in common aside from varying degrees of hostility to immigrants? The answer is a growing revulsion against European federalism. At a time when the establishment is struggling to increase the powers of Brussels, a big win for these groups would indeed be serious.
The European Parliament long ago ceased to be a talking shop. It is now effectively Europe’s House of Representatives, sharing legislative power with the European Council. It elects the president of the commission, vets commission nominees and has the power to force their resignation. A populist parliament would sound the death knell for “ever closer union”.
Yet detailed country-by-country research by my colleague Pierpaolo Barbieri suggests that the populists will fall far short of such a victory. The elections will be a toss-up between the centre-left Progressive Alliance of Socialists and Democrats (S&D) and the centre-right European People’s party (EPP). The S&D will claim victory because the EPP will probably lose seats, as will the Alliance of Liberals and Democrats for Europe. True, Non-Inscrits will probably win about 90 seats – nearly three times the 32 seats they won in 2009 – but the EFD will remain stuck at around 30. Together that is roughly 16 per cent of the parliament’s 751 seats – hardly a populist landslide.
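The 16 per cent figure is simple arithmetic on the projected seat counts quoted above; a minimal sketch, using the text’s own numbers:

```python
# Projected populist seats as quoted: ~90 Non-Inscrits plus ~30 EFD,
# out of the European Parliament's 751 seats.
non_inscrits = 90
efd = 30
total_seats = 751

populist_share = (non_inscrits + efd) / total_seats
print(f"{populist_share:.1%}")  # 16.0%
```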
Crucially, there is very little chance that these disparate elements will be able to act in concert. The Economist recently had fun depicting Mr Farage, Ms Le Pen and the Dutch maverick Geert Wilders in a common populist teapot. In reality, Ukip loses votes when it is associated with Ms Le Pen. In organisational terms, European populism is more like the Mad Hatter’s Tea Party than the Boston Tea Party.
The populists’ real breakthrough will be in France – where the Front National outperformed in recent municipal elections. Ms Le Pen has political skills far superior to those of her father, who had all the subtlety of Obelix. Yet this is a national phenomenon. It will matter only if it makes Ms Le Pen look like a credible candidate for France’s next presidential elections.
Moreover, it is only in France that populists have a real chance of coming first. In the UK, where Labour seems likely to win, the Tories may yet edge ahead of Ukip as the British economy recovers far faster than the Keynesian Cassandras had anticipated. In Germany the eurosceptic party Alternative for Germany is growing, but it is still polling only 6.5 per cent. It will probably win about six seats, compared with 38 for the CDU/CSU of Angela Merkel, chancellor.
And for every France there is a Spain. As Alfredo Pérez Rubalcaba, opposition leader, said in Madrid last month: “Spain will not send a single anti-European deputy to Brussels”. The same applies to Portugal which, despite the hardships of the crisis, remains staunchly pro-European. In Greece, the leadership of the leftist Syriza – which so worried the commentariat – is rapidly moving into the European mainstream. And in Italy, Matteo Renzi’s reinvigorated Democratic party will come in comfortably ahead of Beppe Grillo’s Five Star Movement.
There is also a backlash against the populist backlash: new pro-European parties are polling well in Austria (the Neos) and Greece (To Potami). These newcomers appeal to younger voters, for whom populism seems old and crass.
The financial crisis was bound to have political consequences. Yet the striking thing about Europe’s populists is not how well they are doing, but how badly. A hundred years ago the ultimate beneficiaries of a deflationary downturn were Social Democrats, not populists. The same looks likely today.
Since former Federal Reserve Chairman Ben Bernanke uttered the word "taper" in June 2013, emerging-market stocks and currencies have taken a beating. It is not clear why talk of (thus far) modest reductions in the Fed's large-scale asset-purchase program should have had such big repercussions outside the United States. The best economic explanation is that capital has been flowing out of emerging markets in anticipation of future rises in U.S. interest rates, of which the taper is a harbinger. While plausible, that cannot be the whole story.
For it is not only U.S. monetary policy that is being tapered. Even more significant is the "geopolitical taper." By this I mean the fundamental shift we are witnessing in the national-security strategy of the U.S.—and like the Fed's tapering, this one, too, has big repercussions for the world. To see the geopolitical taper at work, consider President Obama's comment Wednesday on the horrific killings of protesters in the Ukrainian capital, Kiev. The president said: "There will be consequences if people step over the line."
No one took that warning seriously—Ukrainian government snipers kept on killing people in Independence Square regardless. The world remembers the red line that Mr. Obama once drew over the use of chemical weapons in Syria . . . and then ignored once the line had been crossed. The compromise deal reached on Friday in Ukraine calling for early elections and a coalition government may or may not spell the end of the crisis. In any case, the negotiations were conducted without concern for Mr. Obama.
The origins of America's geopolitical taper as a strategy can be traced to the confused foreign-policy decisions of the president's first term. The easy part to understand was that Mr. Obama wanted out of Iraq and to leave behind the minimum of U.S. commitments. Less easy to understand was his policy in Afghanistan. After an internal administration struggle, the result in 2009 was a classic bureaucratic compromise: There was a "surge" of additional troops, accompanied by a commitment to begin withdrawing before the last of these troops had even arrived.
Having passively watched when the Iranian people rose up against their theocratic rulers beginning in 2009, the president was caught off balance by the misnamed "Arab Spring." The vague blandishments of his Cairo speech that year offered no hint of how he would respond when crowds thronged Tahrir Square in 2011 calling for the ouster of a longtime U.S. ally, the Egyptian dictator Hosni Mubarak.
Mr. Obama backed the government led by Mohammed Morsi after the Muslim Brotherhood won the 2012 elections. Then the president backed the military coup against Mr. Morsi last year. On Libya, Mr. Obama took a back seat in an international effort to oust Moammar Gadhafi in 2011, but was apparently not in the vehicle at all when the American mission at Benghazi came under fatal attack in 2012.
Syria has been one of the great fiascos of post-World War II American foreign policy. When President Obama might have intervened effectively, he hesitated. When he did intervene, it was ineffectual. The Free Syrian Army of rebels fighting against the regime of Bashar Assad has not been given sufficient assistance to hold together, much less to defeat the forces loyal to Assad. The president's non-threat to launch airstrikes—if Congress agreed—handed the initiative to Russia. Last year's Russian-brokered agreement to get Assad to hand over his chemical weapons is being honored only in the breach, as Secretary of State John Kerry admitted last week.
The result of this U.S. inaction is a disaster. At a minimum, 130,000 Syrian civilians have been killed and nine million driven from their homes by forces loyal to the tyrant. At least 11,000 people have been tortured to death. Hundreds of thousands are besieged, their supplies of food and medicine cut off, as bombs and shells rain down.
Worse, the Syrian civil war has escalated into a sectarian proxy war between Sunni and Shiite Muslims, with jihadist groups such as the Islamic State of Iraq and Syria and the Nusra Front fighting against Assad, while the Shiite Hezbollah and the Iranian Quds Force fight for him. Meanwhile, a flood of refugees from Syria and the free movement of militants is helping to destabilize neighboring states like Lebanon, Jordan and Iraq. The situation in Iraq is especially dire. Violence is escalating, especially in Anbar province. According to Iraq Body Count, a British-based nongovernmental organization, 9,475 Iraqi civilians were killed in 2013, compared with 10,130 in 2008.
The scale of the strategic U.S. failure is best seen in the statistics for total fatalities in the region the Bush administration called the "Greater Middle East"—essentially the swath of mainly Muslim countries stretching from Morocco to Pakistan. In 2013, according to the International Institute for Strategic Studies, more than 75,000 people died as a result of armed conflict in this region or as a result of terrorism originating there, the highest number since the IISS Armed Conflict database began in 1998. Back then, the Greater Middle East accounted for 38% of conflict-related deaths in the world; last year it was 78%.
Mr. Obama's supporters like nothing better than to portray him as the peacemaker to George W. Bush's warmonger. But it is now almost certain that more people have died violent deaths in the Greater Middle East during this presidency than during the last one.
In a January interview with the New Yorker magazine, the president said something truly stunning. "I don't really even need George Kennan right now," he asserted, referring to the late American diplomat and historian whose insights informed the foreign policy of presidents from Franklin Roosevelt on. Yet what Mr. Obama went on to say about his self-assembled strategy for the Middle East makes it clear that a George Kennan is exactly what he needs: someone with the regional expertise and experience to craft a credible strategy for the U.S., as Kennan did when he proposed the "containment" of the Soviet Union in the late 1940s.
So what exactly is the president's strategy? "It would be profoundly in the interest of citizens throughout the region if Sunnis and Shiites weren't intent on killing each other," the president explained in the New Yorker. "And although it would not solve the entire problem, if we were able to get Iran to operate in a responsible fashion . . . you could see an equilibrium developing between Sunni, or predominantly Sunni, Gulf states and Iran."
Moreover, he continued, if only "the Palestinian issue" could be "unwound," then another "new equilibrium" could be created, allowing Israel to "enter into even an informal alliance with at least normalized diplomatic relations" with the Sunni states. The president has evidently been reading up about international relations and has reached the chapter on the "balance of power." The trouble with his analysis is that it does not explain why any of the interested parties should sign up for his balancing act.
As Nixon-era Secretary of State Henry Kissinger argued more than half a century ago in his book "A World Restored," balance is not a naturally occurring phenomenon. "The balance of power only limits the scope of aggression but does not prevent it," Dr. Kissinger wrote. "The balance of power is the classic expression of the lesson of history that no order is safe without physical safeguards against aggression."
What that implied in the 19th century was that Britain was the "balancer"—the superpower that retained the option to intervene in Europe to preserve balance. The problem with the current U.S. geopolitical taper is that President Obama is not willing to play that role in the Middle East today. In his ignominious call to inaction on Syria in September, he said it explicitly: "America is not the world's policeman."
But balance without an enforcer is almost inconceivable. Iran remains a revolutionary power; it has no serious intention of giving up its nuclear-arms program; the talks in Vienna are a sham. Both sides in the escalating regional "Clash of Sects"—Shiite and Sunni—have an incentive to increase their aggression because they see hegemony in a post-American Middle East as an attainable goal.
The geopolitical taper is a multifaceted phenomenon. For domestic political as well as fiscal reasons, this administration is presiding over deep cuts in military spending. No doubt the Pentagon's budget is in many respects bloated. But, as Philip Zelikow has recently argued, the cuts are taking place without any clear agreement on what the country's future military needs are.
Thus far, the U.S. "pivot" from the Middle East to the Asia Pacific region, announced in 2012, is the nearest this administration has come to a grand strategy. But such a shift of resources makes no sense if it leaves the former region ablaze and merely adds to tension in the latter. A serious strategy would surely make some attempt to establish linkage between the Far East and the Middle East. It is the Chinese, not the Americans, who are becoming increasingly dependent on Middle Eastern oil. Yet all the pivot achieved was to arouse suspicion in Beijing that some kind of "containment" of China is being contemplated.
Maybe, on reflection, it is not a Kennan that Mr. Obama needs, but a Kissinger. "The attainment of peace is not as easy as the desire for it," Dr. Kissinger once observed. "Those ages which in retrospect seem most peaceful were least in search of peace. Those whose quest for it seems unending appear least able to achieve tranquillity. Whenever peace—conceived as the avoidance of war—has been the primary objective . . . the international system has been at the mercy of [its] most ruthless member."
Those are words this president, at a time when there is much ruthlessness abroad in the world, would do well to ponder.
Mr. Ferguson is a history professor at Harvard and a senior fellow at Stanford University's Hoover Institution. His most recent book is "The Great Degeneration" (Penguin Press, 2013).