The False Prophecy of Hyperconnection

 How to Survive the Networked Age

It is a truth universally acknowledged that the world is connected as never before. Once upon a time, it was believed that there were six degrees of separation between each individual and any other person on the planet (including Kevin Bacon). For Facebook users today, the average degree of separation is 3.57. But perhaps that is not entirely a good thing. As Evan Williams, one of the founders of Twitter, told The New York Times in May 2017, “I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place. I was wrong about that.”

Speaking at Harvard’s commencement that same month, Facebook’s chair and CEO, Mark Zuckerberg, looked back on his undergraduate ambition to “connect the whole world.” “This idea was so clear to us,” he recalled, “that all people want to connect. . . . My hope was never to build a company, but to make an impact.” Zuckerberg has certainly done that, but it is doubtful that it was the impact he dreamed of in his dorm room. In his address, Zuckerberg identified a series of challenges facing his generation, among them: “tens of millions of jobs [being] replaced by automation,” inequality (“there is something wrong with our system when I can leave here and make billions of dollars in ten years while millions of students can’t afford to pay off their loans”), and “the forces of authoritarianism, isolationism, and nationalism,” which oppose “the flow of knowledge, trade, and immigration.” What he omitted to mention were the substantial contributions that his company and its peers in Silicon Valley have made to all three of these problems.

No businesses in the world are working harder to eliminate jobs such as driving a truck than the technology giants of California. No individuals exemplify the spectacular growth of the wealth of the top 0.01 percent of earners better than the masters of Silicon Valley. And no company did more—albeit unintentionally—to help the populists win their political victories in the United Kingdom and the United States in 2016 than Facebook. For without Facebook’s treasure house of data about its users, it would surely have been impossible for the relatively low-budget Brexit and Trump campaigns to have succeeded. The company unwittingly played a key role in last year’s epidemic of fake news stories.

Zuckerberg is by no means the only believer in one networked world: a “global community,” in his phrase. Ever since 1996, when the Grateful Dead lyricist turned cyber-activist John Perry Barlow released his “Declaration of the Independence of Cyberspace,” in which he asked the “Governments of the Industrial World, you weary giants of flesh and steel,” to “leave us alone,” there has been a veritable parade of cheerleaders for universal connectivity. “Current network technology . . . truly favors the citizens,” wrote Google’s Eric Schmidt and Jared Cohen in 2013. “Never before have so many people been connected through an instantly responsive network.” This, they argued, would have truly “game-changing” implications for politics everywhere. The early phase of the Arab Spring seemed to vindicate their optimistic analysis; the subsequent descent of Syria and Libya into civil war, not so much.

Like John Lennon’s “Imagine,” utopian visions of a networked world are intuitively appealing. In his Harvard speech, for example, Zuckerberg contended that “the great arc of human history bends towards people coming together in ever-greater numbers—from tribes to cities to nations—to achieve things we couldn’t on our own.” Yet this vision, of a single global community as the pot of gold at the end of the arc of history, is at odds with everything we know about how social networks work. Far from being new, networks have always been ubiquitous in the natural world and in the social life of humans. The only thing new about today’s social networks is that they are the biggest and fastest ever, connecting billions of people in seconds. Long before the founding of Facebook, however, scholars had already conducted a great deal of research into how smaller and slower social networks operate. What they found gives little ground for optimism about how a fully networked world would function.


Six fundamental insights can help those without expertise in network theory to think more clearly about the likely political and geopolitical impacts of giant, high-speed social networks. The first concerns the pattern of connections within networks. Since the work of the eighteenth-century Swiss scholar Leonhard Euler, mathematicians have conceived of networks as graphs of nodes connected together by links or, in the parlance of network theory, “edges.” Individuals in a social network are simply nodes connected by the edges we call “relationships.” Not all nodes or edges in a social network are equal, however, because few social networks resemble a simple lattice, in which each node has the same number of edges as all the rest. Typically, certain nodes and edges are more important than others. For example, some nodes have a higher “degree,” meaning that they have more edges, and some have higher “betweenness centrality,” meaning that they act as the busy junctions through which a lot of network traffic has to pass. Similarly, a few crucial edges can act as bridges, connecting together different clusters of nodes that would otherwise not be able to communicate. Even so, there will nearly always be “network isolates”—individual nodes that are not connected to the main components of the network.
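These ideas are concrete enough to compute. The sketch below is a minimal Python illustration (the graph and its node names are invented for the example, not drawn from any real network): two tight clusters joined by a single bridge, plus one isolate. It measures each node’s degree and its betweenness centrality via Brandes’s algorithm; the two nodes at either end of the bridge score highest on betweenness even though their degree is unremarkable.

```python
from collections import deque

def betweenness(adj):
    """Brandes's algorithm for betweenness centrality (unweighted, undirected)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}   # number of shortest paths from s
        dist = {v: -1 for v in adj}
        preds = {v: [] for v in adj}  # predecessors along shortest paths
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:                  # BFS, counting shortest paths
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}  # dependency accumulation, reverse order
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}  # undirected: halve the double count

# Two clusters, A-B-C and D-E-F, joined only by the bridge edge C-D; G is an isolate.
adj = {
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
    "D": ["C", "E", "F"], "E": ["D", "F"], "F": ["D", "E"],
    "G": [],
}
degree = {v: len(adj[v]) for v in adj}
bc = betweenness(adj)
# C and D carry all traffic between the clusters, so their betweenness is high;
# no shortest path runs through A, B, E, or F, and the isolate G is cut off entirely.
```

Every shortest path between the two clusters must cross C and D, which is exactly what high betweenness centrality formalizes.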

At the same time, birds of a feather flock together. Because of the phenomenon known as “homophily,” or attraction to similarity, social networks tend to form clusters of nodes with similar properties or attitudes. The result, as researchers found when they studied American high schools, can be self-segregation along racial lines or other forms of polarization. The recent division of the American public sphere into two echo chambers, each deaf to the other’s arguments, is a perfect illustration.
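The pull of homophily is easy to reproduce in a toy model. In the sketch below (population size, tie probabilities, and the random seed are all invented for illustration), a tie between like-minded nodes is five times as likely as a tie across the divide; the resulting network’s edges overwhelmingly stay inside each camp.

```python
import random

random.seed(0)
n = 100
group = {i: 0 if i < n // 2 else 1 for i in range(n)}  # two camps of 50
p_same, p_cross = 0.10, 0.02  # homophily: like ties with like five times as often

edges = []
for i in range(n):
    for j in range(i + 1, n):
        p = p_same if group[i] == group[j] else p_cross
        if random.random() < p:
            edges.append((i, j))

cross = sum(1 for i, j in edges if group[i] != group[j])
print(f"{len(edges)} ties, {cross / len(edges):.0%} of them across the divide")
```

Even this mild preference leaves only a small minority of ties crossing between the camps, which is the structural seed of an echo chamber.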

A common error of much popular writing about social networks is to draw a distinction between networks and hierarchies. This is a false dichotomy. A hierarchy is simply a special kind of network with restricted numbers of horizontal edges, enabling a single ruling node to maintain an exceptionally high degree and exceptionally high betweenness centrality. The essence of any autocracy is that nodes further down the organizational chart cannot communicate with one another, much less organize, without going through the central node. The correct distinction is between hierarchical networks and distributed ones.

For most of history, hierarchical networks dominated distributed networks. In relatively small communities with relatively frequent conflicts, centralized leadership enjoyed a big advantage, because warfare is generally easier with centralized command and control. Moreover, in most agricultural societies, literacy was the prerogative of a small elite, so that only a few nodes were connected by the written word. But then, more than 500 years ago, came the printing press. It empowered Martin Luther’s heresy and gave birth to a new network.

Luther thought the result of his movement to reform the Roman Catholic Church [8] would be what came to be called “the priesthood of all believers,” the sixteenth-century equivalent of Zuckerberg’s “global community.” In practice, the Protestant Reformation produced more than a century of bloody religious conflict. This was because new doctrines such as Luther’s, and later John Calvin’s, did not spread evenly through European populations. Although Protestantism swiftly acquired the structure of a network, homophily led to polarization, with those parts of Europe that most closely resembled urban Germany in terms of population density and literacy embracing the new religion and the more rural regions reacting against it, embracing the papal Counter-Reformation. Yet it proved impossible for Catholic rulers to destroy Protestant networks, even with mass executions, just as it proved impossible to wholly stamp out Catholicism in states that adopted the Reformation.


The second insight is that weak ties are strong. As the Stanford sociologist Mark Granovetter demonstrated in a seminal 1973 article, acquaintances are the bridges between clusters of friends, and it is those weak ties that make the world seem small. In the famous letter-forwarding experiment that the psychologist Stanley Milgram published in 1967, there turned out to be just seven degrees of separation between a widowed clerk in Omaha, Nebraska, and a Boston stockbroker she did not know.
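Granovetter’s point can be seen in a few lines of code. In this sketch (the ring size, number of shortcuts, and seed are arbitrary choices for the example), sixty people sit on a ring knowing only their four nearest neighbors; adding just five random long-range acquaintances, the “weak ties,” sharply shortens the average path between any two of them.

```python
import random
from collections import deque

def avg_path_length(adj):
    # Mean shortest-path length over all connected ordered pairs, via BFS.
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

n, k = 60, 2  # a ring of 60 people, each knowing k neighbors on either side
ring = {i: set() for i in range(n)}
for i in range(n):
    for d in range(1, k + 1):
        ring[i].add((i + d) % n)
        ring[(i + d) % n].add(i)
before = avg_path_length(ring)

random.seed(1)
for _ in range(5):  # five random long-range acquaintances ("weak ties")
    a, b = random.sample(range(n), 2)
    ring[a].add(b)
    ring[b].add(a)
after = avg_path_length(ring)
# A handful of weak ties collapses the average distance between strangers.
```

This is the mechanism behind the small-world result: the close friends are redundant, while the stray acquaintances are the shortcuts.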

Like the Reformation, the scientific revolution and the Enlightenment were network-driven phenomena, yet they spread faster and farther. This reflected the importance of acquaintances in correspondence networks such as Voltaire’s and Benjamin Franklin’s, communities that might otherwise have remained subdivided into national clusters. It also reflected the way that new social organizations—notably, Freemasonry—increased the connectedness of like-minded men, despite established divisions of social status. It is no accident that so many key figures in the American Revolution, from George Washington to Paul Revere, were also Freemasons.


Third, the structure of a network determines its virality. As recent work by the social scientists Nicholas Christakis and James Fowler has shown, the contagiousness of a disease or an idea depends as much on a social network’s structure as on the inherent properties of the virus or meme. The history of the late eighteenth century illustrates that point well. The ideas that inspired both the American Revolution and the French Revolution were essentially the same, and both were transmitted through the networks of correspondence, publication, and sociability. But the network structures of Colonial America and ancien régime France were profoundly different (for example, the former lacked a large, illiterate peasantry). Whereas one revolution produced a relatively peaceful, decentralized democracy, albeit one committed to a transitional period of slavery, the other established a violent and at times anarchic republic that soon followed the ancient Roman path to tyranny and empire.
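The structural point can be simulated directly. The sketch below (all sizes, probabilities, and seeds are invented for the example) runs the same independent-cascade contagion, with an identical per-contact transmission probability, over two networks with the same number of nodes and edges: a chain of tight clusters joined by single bridges, and a randomly wired network. The contagion routinely stalls at the bridges in the first but sweeps the second.

```python
import random
from collections import deque

def cascade(adj, seed, p, rng):
    # Independent-cascade model: each newly infected node gets one
    # chance to infect each neighbor, with probability p per contact.
    infected = {seed}
    frontier = deque([seed])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if w not in infected and rng.random() < p:
                infected.add(w)
                frontier.append(w)
    return len(infected)

n = 200
# Network 1: twenty tight clusters of ten; adjacent clusters share one bridge.
clustered = {i: [j for j in range(n) if j // 10 == i // 10 and j != i] for i in range(n)}
for i in range(0, n - 10, 10):
    clustered[i].append(i + 10)
    clustered[i + 10].append(i)

# Network 2: the same number of nodes and edges, wired at random.
n_edges = sum(len(v) for v in clustered.values()) // 2
rng = random.Random(7)
random_net = {i: [] for i in range(n)}
for _ in range(n_edges):
    a, b = rng.sample(range(n), 2)
    random_net[a].append(b)
    random_net[b].append(a)

runs, p = 200, 0.3
rng = random.Random(3)
reach_clustered = sum(cascade(clustered, 0, p, rng) for _ in range(runs)) / runs
rng = random.Random(3)
reach_random = sum(cascade(random_net, 0, p, rng) for _ in range(runs)) / runs
# Identical contagion, identical edge budget: the randomly wired network
# is infected far more widely than the chain of clusters.
```

The “virus” is the same in both runs; only the wiring differs, and the wiring decides the size of the outbreak.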

Hierarchical order was not easily restored after the fall of Napoleonic France in 1814. It took the great powers that dominated the Congress of Vienna, which concluded the next year, to reestablish monarchical governance in Europe and then export it to most of the world in the form of colonial empires. What made the spread of imperialism possible was the fact that the technologies of the industrial age—railways, steamships, and telegraphs—favored the emergence of “superhubs,” with London as the most important node. In other words, the structure of networks had changed, because the new technologies lent themselves to central control in ways that had not been true of the printing press or the postal service. The first age of globalization, between 1815 and 1914, was a time of train controllers and timetables.


Fourth, many networks are complex adaptive systems that are constantly shifting shape. Such was the case even for the most hierarchical states of all time, the totalitarian empires presided over by Adolf Hitler, Joseph Stalin, and Mao Zedong. With his iron grip on the party bureaucracy and his ability to tap the Soviet telephone system, Stalin was perhaps the supreme autocrat, a man so powerful that he could effectively outlaw all unofficial social networks, even persecuting the poet Anna Akhmatova for one illicit night of conversation with the philosopher Isaiah Berlin. In the 1950s, Christian democratic Europe and corporate America were hierarchical, too—just look at the midcentury organizational charts for General Motors—but not to anything like the same extent. A network-based reform campaign such as the civil rights movement was unthinkable in the Soviet Union. Those who campaigned against racial segregation in the American South were harassed, but efforts to suppress them ultimately failed.

The middle of the twentieth century was a time that lent itself to hierarchical governance. Beginning in the 1970s, however, that began to change. It is tempting to assume that credit goes to technology. On closer inspection, however, Silicon Valley was a consequence, rather than a cause, of weakening central control. The Internet was invented in the United States and not in the Soviet Union precisely because the U.S. Defense Department, preoccupied with a disastrous war in Vietnam, essentially let the computer scientists in California build whatever system for computer-to-computer communication they liked. That did not happen in the Soviet case, where an analogous project, directed by the Institute of Cybernetics, in Kiev, was simply shut down by the Ministry of Finance.

The 1970s and 1980s saw two great phase transitions within the superpowers that waged the Cold War, marking the dawn of the second networked age. In the United States, the resignation of President Richard Nixon seemed to represent a major victory for the free press and representative government over the would-be imperial presidency. Yet the Watergate scandal, the defeat in Vietnam, and the social and economic crises of the mid-1970s did not escalate into a full breakdown of the system. Indeed, the presidency of Ronald Reagan restored the prestige of the executive branch with remarkable ease. By contrast, the collapse of the Soviet empire in Eastern Europe was brought about by networks of anticommunist dissent that had almost no technologically advanced means of communication. Indeed, even printing was denied to them, hence the underground literature known as “samizdat.” The Polish case illustrates the role of networks well: the trade union Solidarity succeeded only because it was itself embedded in a heterogeneous web of opposition groups.


The fifth insight is that networks interact with one another, and it takes a network to defeat a network. When networks link up with other networks, innovation often results. But networks can also attack one another. A good example is the way the Cambridge University intellectual society known as the Apostles came under attack by the KGB in the 1930s. In one of the most successful intelligence operations of the twentieth century, the Soviets managed to recruit several spies from the Apostles’ ranks, yielding immense numbers of high-level British and Allied documents during and after World War II.

The case illustrates one of the core weaknesses of distributed networks. It was not only the Cambridge intelligentsia that the Soviets penetrated; they also hacked into the entire old-boy network that ran the British government in the twentieth century. They were able to do so precisely because the unspoken assumptions and unwritten rules of the British establishment caused telltale evidence of treachery to be overlooked or explained away. Unlike hierarchies, which tend to be paranoid about security, distributed networks are generally bad at self-defense.

Likewise, the 9/11 attacks were carried out by one network on another network: al Qaeda against the U.S. financial and political system. Yet it was not the immediate damage of the terrorist attacks that inflicted the real cost on the United States so much as the unintended consequences of the national security state’s response. Writing in the Los Angeles Times in August 2002, before it was even clear that Iraq was to be invaded, the political scientist John Arquilla presciently pointed out the flaws in such an approach. “In a netwar, like the one we find ourselves in now, strategic bombing means little, and most networks don’t rely on one—or even several—great leaders to sustain and guide them,” he wrote. Faulting the George W. Bush administration for creating the Department of Homeland Security, he argued, “A hierarchy is a clumsy tool to use against a nimble network: It takes networks to fight networks, much as in previous wars it has taken tanks to fight tanks.”

It took four painful years after the invasion of Iraq to learn this lesson. Looking back at the decisive phase of the U.S. troop surge in 2007, U.S. General Stanley McChrystal summed up what had been learned. In order to take down the terrorist network of Abu Musab al-Zarqawi, McChrystal wrote, his task force “had to replicate its dispersion, flexibility, and speed.” He continued: “Over time, ‘It takes a network to defeat a network’ became a mantra across the command and an eight-word summary of our core operational concept.”


The sixth insight is that networks are profoundly inegalitarian. One enduring puzzle is why the 2008 financial crisis inflicted larger economic losses on the United States and its allies than did the terrorist attacks of 2001, even though no one plotted the financial crisis with malice aforethought. (Plausible estimates for the losses that the financial crisis inflicted on the United States alone range from $5.7 trillion to $13 trillion, whereas the largest estimate for the cost of the war on terrorism stands at $4 trillion.) The explanation lies in the dramatic alterations in the world’s financial structure that followed the introduction of information technology to banking. The financial system had grown so complex that it tended to amplify cyclical fluctuations. It was not just that financial centers had become more interconnected, and with higher-speed connections; it was that many institutions were poorly diversified and inadequately insured. What the U.S. Treasury, the Federal Reserve, and other regulatory authorities failed to grasp when they declined to bail out Lehman Brothers in 2008 was that although its chief executive, Richard Fuld, was something of a network isolate on Wall Street—unloved by his peers (including the U.S. treasury secretary, Henry Paulson, formerly the head of Goldman Sachs)—the bank itself was a crucial node in a dangerously fragile international financial network. Economists untrained in network theory woefully underestimated the impact of letting Lehman Brothers fail.

In the period after the financial crisis, everyone else caught up with the financial world: the rest of society got networked in the ways that, ten years ago, only bankers had been. This change was supposed to usher in a brave new world of global community, with every citizen also a netizen, equipped by technology to speak truth to power and hold it to account. Yet once again, the lessons of network theory had been overlooked, for giant social networks are not in the least bit egalitarian. To be precise, they have many more nodes with a very large number of edges and many more with very few edges than would be the case in a randomly generated network. This is because, as social networks expand, the nodes gain new edges in proportion to the number that they already have. The phenomenon is a version of what the sociologist Robert Merton called “the Matthew effect,” after the Gospel of Matthew 25:29: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.” In science, for example, success breeds success: to the scientist who already has citations and prizes, more shall be given. But the trend is perhaps most visible in Silicon Valley. In 2001, the software developer Eric Raymond confidently predicted that the open-source movement would win out within three to five years. He was to be disappointed. The open-source dream died with the rise of monopolies and duopolies that successfully fended off government regulation that might have inhibited their growth. Apple and Microsoft established something close to a software duopoly. Beginning as a bookseller, Amazon came to dominate online retail. Google even more swiftly established a near monopoly on search. And of course, Facebook won the race to dominate social media.
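The rich-get-richer dynamic described above has a standard model: preferential attachment, in which each newcomer links to existing nodes in proportion to the links they already have. The following minimal sketch (network size, seed triangle, and random seed are arbitrary choices for the example) grows such a network and shows how heavily its ties concentrate at the top.

```python
import random

def preferential_attachment(n, m=2, seed=42):
    # Grow a network one node at a time; each newcomer attaches to m
    # existing nodes chosen in proportion to their current degree.
    rng = random.Random(seed)
    degree = [0] * n
    endpoints = []  # each node appears here once per edge it touches
    for a, b in [(0, 1), (1, 2), (0, 2)]:  # seed triangle
        endpoints += [a, b]
        degree[a] += 1
        degree[b] += 1
    for new in range(3, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))  # picked with probability ∝ degree
        for t in targets:
            endpoints += [new, t]
            degree[new] += 1
            degree[t] += 1
    return degree

deg = sorted(preferential_attachment(500), reverse=True)
top_share = sum(deg[:25]) / sum(deg)
# The top 5 percent of nodes end up with far more than 5 percent of all ties,
# unlike a randomly generated network, where every node's share is about equal.
```

This is the Matthew effect in miniature: nodes that arrive early and accumulate edges keep attracting more of them.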

At the time of this writing, Facebook has 1.17 billion active daily users. Yet the company’s ownership is highly concentrated. Zuckerberg himself owns just over 28 percent of the company, making him one of the ten richest people in the world. That group also includes Bill Gates, Jeff Bezos, Carlos Slim, Larry Ellison, and Michael Bloomberg, whose fortunes all derive in some way or another from information technology. Thanks to the rich-get-richer effect, the returns to their businesses do not diminish. Vast cash reserves allow them to acquire any potential competitor.

At Harvard, Zuckerberg envisioned “a world where everyone has a sense of purpose: by taking on big meaningful projects together, by redefining equality so everyone has the freedom to pursue purpose, and by building community across the world.” Yet Zuckerberg personifies what economists call “the economics of superstars,” whereby the top talents in a field earn much, much more than the runners-up. And paradoxically, most of the remedies for inequality that Zuckerberg mentioned in his address—a universal basic income, affordable childcare, better health care, and continuous education—are viable only as national policies delivered by the twentieth-century welfare state.


Few historical analogues capture the global impact of the Internet better than the impact of printing on sixteenth-century Europe. The personal computer and the smartphone have empowered the individual as much as the pamphlet and the book did in Luther’s time. Indeed, the trajectories for the production and price of personal computers in the United States between 1977 and 2004 look remarkably similar to the trajectories for the production and price of printed books in England from 1490 to 1630.

But there are some major differences between the current networked age and the era that followed the advent of European printing. First, and most obvious, today’s networking revolution is much faster and more geographically extensive than the wave of revolutions unleashed by the German printing press.

Second, the distributional consequences of the current revolution are quite different. Early modern Europe was not an ideal place to enforce intellectual property rights, which in those days existed only when technologies could be secretively monopolized by a guild. The printing press created no billionaires: Johannes Gutenberg was no Gates (by 1456, in fact, he was effectively bankrupt). Moreover, only a subset of the media made possible by the printing press—newspapers and magazines—sought to make money from advertising, whereas all the most important network platforms made possible by the Internet do. That is where the billions of dollars come from. More than in the past, there are now two distinct kinds of people in the world: those who own and run the networks and those who merely use them.

Third, the printing press had the effect of disrupting religious life in Western Christendom before it disrupted anything else. By contrast, the Internet began by disrupting commerce; only very recently did it begin to disrupt politics, and it has truly disrupted just one religion, Islam, by empowering the most extreme version of Sunni fundamentalism.

Nevertheless, there are some clear similarities between our time and the revolutionary period that followed the advent of printing. For one thing, just as the printing press did, modern information technology is transforming not only the market—for example, facilitating short-term rentals of apartments—but also the public sphere. Never before have so many people been connected together in an instantly responsive network through which memes can spread faster than natural viruses. But the notion that taking the whole world online would create a utopia of netizens, all equal in cyberspace, was always a fantasy—as much a delusion as Luther’s vision of a “priesthood of all believers.” The reality is that the global network has become a transmission mechanism for all kinds of manias and panics, just as the combination of printing and literacy temporarily increased the prevalence of millenarian sects and witch crazes. The cruelties of the Islamic State, or ISIS, seem less idiosyncratic when compared with those of some governments and sects in the sixteenth and seventeenth centuries. The contamination of the public sphere with fake news today is less surprising when one remembers that the printing press disseminated books about magic as well as books about science.

Moreover, as in the period during and after the Reformation, the current era is witnessing the erosion of territorial sovereignty. In the sixteenth and seventeenth centuries, Europe was plunged into a series of religious wars because the principle formulated at the 1555 Peace of Augsburg—cuius regio, eius religio (to each realm, its ruler’s religion)—was being honored mainly in the breach. In the twenty-first century, there is a similar phenomenon of escalating intervention in the domestic affairs of sovereign states. Consider the Russian attempt to influence the 2016 U.S. presidential election. Moscow’s hackers and trolls pose a threat to American democracy not unlike the one that Jesuit priests once posed to the English Reformation.

For the scholar Anne-Marie Slaughter, the “hyper-networked world” is, on balance, a benign place. The United States “will gradually find the golden mean of network power,” she wrote in these pages last year, if its leaders figure out how to operate not just on the traditional “chessboard” of interstate diplomacy but also in the new “web” of networks, exploiting the advantages of the latter (such as transparency, adaptability, and scalability). Others are less confident. In The Seventh Sense, Joshua Cooper Ramo argues for the erection of real and virtual “gates” to shut out the Russians, the online criminals, the teenage Internet vandals, and other malefactors. Yet Ramo himself quotes the three rules of computer security devised by the National Security Agency cryptographer Robert Morris: “RULE ONE: Do not own a computer. RULE TWO: Do not power it on. RULE THREE: Do not use it.” If everyone continues to ignore those imperatives—and especially political leaders, most of whom have not even enabled two-factor authentication for their e-mail accounts—even the most sophisticated gates will be useless.

Those who wish to understand the political and geopolitical implications of today’s interconnectedness need to pay more heed to the major insights of network theory than they have hitherto. If they did, they would understand that networks are not as benign as advertised. The techno-utopians who conjure up dreams of a global community have every reason to dispense their Kool-Aid to the users whose data they so expertly mine. The unregulated oligopoly that runs Silicon Valley has done very well indeed from networking the world. The rest of us—the mere users of the networks they own—should treat their messianic visions with the skepticism they deserve.

Copyright © 2017 by the Council on Foreign Relations, Inc. All rights reserved.
