The more machines learn, the less we shall grasp

The rise of artificial intelligence threatens to make our world more mystifying

Are we living through the remystification of the world? Much that goes on around us is baffling these days. Financial market movements, for example, seem increasingly mysterious. Why, after close to a decade of sustained recovery from the nadir of early 2009, did global stock markets sell off so sharply this month?

We who claim expertise in these matters can tell stories about what just happened, but the nasty feeling persists that we haven’t a clue. Twelve weeks ago I warned that “financial red lights” were flashing again. Was I prescient or just lucky? My argument was that, as central banks raised interest rates and wound down the asset purchasing programmes known as quantitative easing, there was bound to be downward pressure on stock markets. I also argued that, for demographic and other reasons, the end of the prolonged bond bull market was nigh.

I still like that story, as it’s based on familiar patterns from financial history, even if heeding my words would have made you no money. (All that has happened is that, after a final two-month surge, markets have reverted to where they were when I wrote my column. Listening to me would have spared you a round trip.)

Yet the market gyrations of the past two weeks have elicited a host of more exotic explanations. Each stock market correction has its villains, the product or people whom everybody else can blame for their losses. This time around the bad guys included an exchange traded note, XIV, which enabled investors to bet on continuing low volatility, and large quantitative hedge funds that employ “risk parity” and “trend following” strategies. For people at dinner parties who wanted to avoid explaining these rather complicated things, there was a simpler formulation: it was all the fault of “the machines” or “the algos” (as in algorithms).

Nobody doubts that computers play a far larger role in financial markets today than ever before. It seems reasonable to assume that automated transactions by index tracking funds, not to mention high-frequency trading by quant funds, tend to amplify market movements. Yet there is no need to invoke these novelties to explain the return of normal financial volatility. There is, to my mind, a superstitious quality to the phrase: “It was the machines.”

For most of human history, superstition was the dominant mode of explanation. If the crops failed, it was the wrath of the gods. If a child died, it was the work of evil spirits. As the Oxford historian Keith Thomas showed in his marvellous book Religion and the Decline of Magic, people in England blamed misfortune on witches until late in the 17th century.

The great German sociologist Max Weber argued that modernity was about the advance of rationalism and the retreat of mystery — what he called the “disenchantment [Entzauberung] of the world”. People said goodbye to magic and entered an “iron cage” of rationality and bureaucracy. Weber borrowed the word Entzauberung from the poet and playwright Friedrich Schiller. I have always thought “demystification” a more precise, if clumsier, translation. The point is that this process may be reversible.

“The machines” are getting smarter every day. Computer scientists in America and China vie with one another to achieve the breakthroughs in artificial intelligence (AI) that will not only make driverless vehicles the dominant transport system, but also revolutionise almost every activity that depends on human pattern recognition.

Machine learning is already superior to human learning in numerous domains. The best human players of chess and the Chinese game Go no longer stand a chance against the computers of the pioneering British company DeepMind, which Google acquired in 2014. Even that least cerebral of games, football, is being revolutionised by AI. The brilliant Hungarian-American physicist Albert-László Barabási told me last week that computers in his laboratory at Northeastern University in Massachusetts already do a better job of assessing the performance of football players than human experts.

As they try to understand the implications of the rapid advance of AI, people tend to think in terms of science fiction. The usual reference is to 2001: A Space Odyssey, the 1968 Stanley Kubrick film in which the “foolproof and incapable of error” computer HAL 9000 attempts to kill the entire crew of a spaceship.

But perhaps the right way to think of AI is historical — as a phenomenon that may return humanity to the old world of mystery and magic. As machine learning steadily replaces human judgment, we shall find ourselves as baffled by events as our pre-modern forefathers were. For we shall no more understand the workings of the machines than they understood the vagaries of nature. Already, many of us stand in the same relation to financial “flash crashes” as medieval peasants did to flash floods.

The point, as former Google chairman Eric Schmidt explained to me last year, is that even the best software engineers in Silicon Valley no longer fully understand how their own algorithms work.

At firms such as Nvidia, they program the self-driving cars to teach themselves how to drive. This “deep learning” goes deeper than our paltry human minds can fathom. How exactly is Deep Patient, a system developed at Mount Sinai Hospital in New York, able to predict which patients may succumb to schizophrenia? We don’t really know, and Deep Patient isn’t designed to explain its reasoning.

AI is no longer about getting computers to think like humans, only faster. It is about getting computers to think like a species that had evolved brains much bigger than ours — not like humans at all. The question is: how shall we cope with this remystification of the world? Shall we begin to worship the machines — to propitiate them with prayers, or even sacrifices? Or shall we just lapse into fatalism?

Mankind — or “peoplekind”, as the Canadian prime minister, Justin Trudeau, has renamed us — stands on the threshold of a new era. I would like to believe that the sum of human happiness will be increased by deep learning. Perhaps it may. But I fear that the sum of human understanding may end up being reduced.

Consider a political example. Many British people today wonder why Brexit is going wrong. “Bremorse” is, for the first time, detectable in the polls. Growing numbers of people want to rerun the referendum. The government appears to be sleepwalking towards a transition agreement in which nothing of substance changes except that the UK loses all voting rights in Brussels. All this was more or less predictable two years ago. Yet The Daily Telegraph and the Daily Mail have an alternative explanation: a “secret plot to sabotage Brexit” by the dastardly cosmopolitan financier George Soros.

This kind of explanation also has a history, and not an edifying one. If the remystification of the world means a revival of thinly veiled anti-semitism as well as magical thinking, then I’m staying put in Weber’s iron cage.

Niall Ferguson is the Milbank Family senior fellow at the Hoover Institution, Stanford

The Sunday Times