My crystal ball missed Brexit but got Donald Trump

 Those who make predictions must keep a tally. So how did I do?

It has been nearly 4½ years since I began writing this column, which works out at roughly 240,000 words altogether. As these will be my last words in these pages, it’s time to look back and take stock. If part of your job is to be a pundit then, as the University of Pennsylvania political scientist Philip Tetlock argues in Superforecasting: The Art and Science of Prediction, you need to keep score.

As Tetlock had a dig at me in that book — which was published in 2015, before I began writing for The Sunday Times — this is also a good opportunity to settle a score.

Tetlock was, of course, quite right about most public intellectuals (a term that always makes me think of public conveniences). They seldom hold themselves to account. Nor do they get fired if their predictions are consistently wrong, as long as they are entertaining enough to engage readers.

Since I set up an advisory firm nine years ago, though, my approach has been different — out of necessity, as fund managers differ from newspaper editors in their attitude to predictions. Not only do they notice when you’re wrong, because one or more financial indicators make that clear; they also let you know about it (with grim relish, usually). If you’re wrong too often, it’s goodbye.

So at the beginning of each year we at Greenmantle make predictions about the year ahead, and at the end of the year we see — and tell our clients — how we did. Each December we also rate every predictive statement we have made in the previous 12 months as “true”, “false” or “not proven”. In recent years, we have also forced ourselves to attach probabilities to our predictions — not easy when so much lies in the realm of uncertainty rather than calculable risk. We have, in short, tried to be superforecasters. And with some success.

Now it’s time to apply the same retrospective scoring to this column. So as to meet my deadline, I’ve picked my first full year at The Sunday Times, which was the annus mirabilis — or horribilis, depending on your politics — beginning on November 1, 2015, the date of my first column.

Three minor themes are worth mentioning. I argued repeatedly that the twin problems of Islamic extremist networks and mass migration from the Muslim world were not likely to go away: “Think of Isis as the Facebook of Islamic extremism” (March 27, 2016). I began warning, as early as May of that year, that the rise of Silicon Valley’s big tech companies was not an unmitigated boon: “What the state knows is just a fraction of what Facebook knows about you” (May 15). And I noted the dire implications for Labour of the antisemitism of Jeremy Corbyn and his circle (May 1).

But by far the biggest issues of my first year on this page — and subsequent years too — were Britain’s vote to leave the EU and the election of Donald Trump. How did I do?

On Brexit, I was wrong. From the outset, I was a remainer. “The idea that we can . . . separate ourselves from Europe is an illusion,” I wrote on February 21. “For the future of Europe without us would be one of escalating instability.” Impolitely, I called Brexiteers “Angloonies” and “happy morons”. When the remain side lost, I predicted a “stairway to hell” — or at least a recession (June 26). Wrong.

At the end of the year, on December 11, 2016, I made a confession. I had been motivated to back remain more because of “my personal friendship with [David] Cameron and George Osborne” than out of any deep allegiance to the EU. I regretted — and still regret — not urging Cameron to reject “the risible terms that the European leaders offered him back in February on EU migrants’ eligibility for benefits”. That was the moment he should have called their bluff by backing Brexit.

Yet the humiliation of Brexit gave me an advantage over American commentators on the 2016 presidential race. I had moments of doubt, admittedly. I compared Trump to unsuccessful Republican candidates Wendell Willkie (December 13, 2015) and Barry Goldwater (January 31, 2016). On April 3, 2016, I predicted the bursting of the Trump bubble in the Wisconsin primary. Ted Cruz won that, but it didn’t burst the bubble. Far more often, I went against the conventional wisdom that Trump was doomed to lose.

“Trump has the face that fits the ugly mood in America,” was my headline on November 1, 2015. “Trump has both the resources and the incentives to press on. In the current national mood of disaffection with professional politicians, he could seem an attractive alternative to Hillary Clinton . . . The point about Trump is that his appeal is overwhelmingly a matter of style over substance. It is not what he says that a great many white Americans like — it is the way that he says it.”

I was against Trump. I was a signatory of a “never Trump” letter. I repeatedly condemned his “open expressions of racial prejudice and xenophobia”, his isolationism (December 13, 2015) and his fishy bromance with Vladimir Putin (May 8 and October 16, 2016). I regretted that Mike Bloomberg chose not to run (October 23).

But I also saw clearly the strength of his appeal. “Trump is winning,” I wrote on February 28, 2016, “because no other candidate has a more convincing explanation of why so many Republican voters genuinely are worse off today than in 2000 . . . But no one can rule out Democratic defections to Trump when it comes to the crunch on November 8.” On March 6, I imagined Trump winning and running for an unconstitutional third term in 2024. “Trump can beat Hillary Clinton,” I wrote on May 8.

“Can Trump succeed where [Mitt] Romney failed?” I asked on July 21. “Yes . . . many young voters will fail to show up for Clinton. Meanwhile, the white lower class, especially the older cohorts, will turn out for Trump in droves, just as their English counterparts turned out for Brexit.”

The choice between Clinton and Trump was a choice between “snafu” and “fubar”, I wrote on September 18, “but wouldn’t you risk being fubar . . . if it was your only shot at avoiding four more years of snafu?”

“This rage against the global,” I wrote a week later, “is why Trump could win this election. It is why Brexit happened. It is why populists are gaining ground wherever free elections are held.”

I marked my first anniversary at this paper with a column that compared Trump to the Chicago Cubs, the outsiders who had just won the baseball World Series. “He can win,” I wrote, “if there is a differential in turnout between his supporters and [Clinton’s] in the battleground states comparable to the age and ethnicity-based differentials in the UK referendum” (November 6).

Now, dear reader, you are burning to know what I think will happen this November. Bad luck. You will have to seek my superforecast in another publication.

Niall Ferguson is the Milbank Family senior fellow at the Hoover Institution, Stanford, and managing director of Greenmantle

Coronavirus: we should have learnt from Sars, not swine flu

 If H1N1 had been worse, the elderly might not be in such danger today

The word “genocide” — meaning the murder of a tribe or people — was coined in 1944 by Raphael Lemkin, a Polish-Jewish refugee from Nazism, whose family was all but obliterated in the Holocaust. The word “senicide” — meaning the deliberate murder of the elderly — is less well known, though of older provenance. According to the Oxford English Dictionary, it was first used by the Victorian explorer Sir Henry Hamilton Johnston. “The ancient Sardi of Sardinia,” he wrote in 1889, “regarded it as a sacred . . . duty for the young to kill their old relations.”

Lemkin’s word caught on. Not only did the United Nations general assembly unanimously pass a resolution in 1946 condemning genocide; by 1948 it had also approved — again, nem con — a convention on the prevention and punishment of the crime of genocide.

Although America did not ratify that convention until 1985, use of the word grew exponentially from its first publication. (I hesitate to say that it went viral.) Enter “genocide” into Amazon’s search field and you will have more than 10,000 results to trawl through.

Not so “senicide”. There are just two books on that subject on Amazon’s site: The Customary Practice of Senicide. With Special Reference to India by Pyali Chatterjee, and Death Clock Ticking: Senicide, Ageism and Dementia Discrimination in Geriatric Medicine by Itu Taito. The latter has not yet been published. Oh, and there’s a perfectly ghastly song called Senicide by a Californian heavy metal band called Huntress.

There are a few older books that use the word, nearly all in connection with the alleged practices of ancient or obscure tribes (the Padaeans of India, the Votyaks of Russia, the early American Hopi, the Netsilik Inuit of Canada, South Africa’s San people and the Amazonian Bororos). But senicide is so rare a word that Microsoft Word’s spellcheck underlines it in red, itching to auto-correct it to “suicide”.

All that is about to change. If, as seems increasingly likely, a significant number of western countries are going to continue mismanaging the pandemic caused by the virus Sars-CoV-2 — the novel coronavirus that originated in Wuhan, China, in December — then a very large number of old people are going to die before their time.

The statistics are unequivocal. In China, where the epidemic seems for the moment to be under control, the case fatality rate for those under 50 was 0.2%. For those over 60 it was 3.6%, for the over-70s 8% and for the over-80s 14.8%. In Italy — now the country worst affected by Covid-19, the disease the virus causes — the fatality rate for the over-70s thus far has been 11.8%, for the over-80s 18.8% and for the over-90s 21.6%.

It is, in one respect, a blessing that Covid-19 seems to be “ageist”. Most pandemics are not so merciful towards children. In America, for example, the 1957-8 influenza pandemic killed the under-5s at an even higher rate than it killed the over-64s.

It is also true that there have never, in all of history, been so many old folk. Today more than a quarter of Japan’s population are aged 65 or older. In 1960, the share was just 5.6%. In the European Union, the share has doubled from 10% to 20%. The world as a whole has gone from 5% elderly to 9%.

And it is true, too, that doctors in an overwhelmed hospital with insufficient intensive care units are correct, from a utilitarian perspective, to give priority to the young over those nearing the end of their natural lives. I do not blame the Italian doctors who have been practising this form of triage.

Yet when this pandemic has run its course — when we have achieved “herd immunity” as a species and when vaccines and therapies have been devised — there will have been a lot more funerals for elderly Italians and, very probably, Americans and Britons than for Taiwanese or South Koreans.

And the reason for this discrepancy will not be bad luck. The reason will be that east Asian countries drew the right conclusions from the searing experiences of Sars in 2003, while most western countries drew the wrong conclusions from their relatively mild encounter with H1N1, commonly known as swine flu, in 2009.

That Covid-19 was both highly contagious (because it is easily carried and transmitted by asymptomatic individuals) and much more deadly than seasonal flu was already obvious as early as January 26, when I first wrote about the coming pandemic in this column. And yet numerous governments — including the American and the British ones — dithered for the better part of two months.

It was not only Donald Trump’s irresponsible nonchalance that did the damage. There were also failures by the very organisations that were supposed to prepare our countries for a threat such as this. In America there has been a scandalous insufficiency of testing kits, so that, as recently as last week, the country was still lagging behind Belarus and Russia in terms of tests per capita.

In the UK, policy was initially based on the notion that the country would be better off aiming for early herd immunity than trying to suppress the spread of the new disease — until epidemiologists such as my near namesake Neil Ferguson (whom we must all wish a swift recovery, as he developed Covid-19-like symptoms last week) pointed out the likely disastrous consequences.

Because of these blunders, America and the UK have moved far too slowly to adopt the combination of mass testing, enforced social distancing and contact tracing that has successfully contained the virus’s spread in east Asian countries. There is a reason the death toll in South Korea is just over 100, while in Italy it is almost 5,000.

How many people will die in the end? We do not know. In America, if Italian conditions are replicated in New York and California, we could see between half a million and a million deaths by the end of this year. I have seen estimates as high as 1.7 million, even 2.2 million. The other Ferguson’s worst-case scenario for Britain was 510,000 deaths. But the key point is that most of the victims will be old. And most of the deaths could have been avoided with better preparation and earlier action.

The 19th-century Russian historian Nikolai Karamzin defined senicide as “the right of children to murder parents overburdened by senium [old age] and illnesses, onerous to the family and useless to fellow citizens”. The explorers Knud Rasmussen and Gontran de Poncins reported that senicide was still practised by the Netsilik of King William Island as recently as the 1930s.

But senicide will never be tolerated in the 2020s, least of all in modern, developed democracies. Those whose sins of omission and commission lead to nationwide senicides will, like the perpetrators of genocides in the 20th century, be judged harshly, not only by history, but also by voters — and quite possibly by judges too.

Niall Ferguson is the Milbank Family senior fellow at the Hoover Institution, Stanford

The deadliest virus we face is complacency

 Our vulnerability to pandemics puts climate change fears in the shade

When I was 11 years old, I was scarred for life by the BBC. It was 1975 and the show was called Survivors. The title sequence begins with a masked Chinese scientist dropping a glass flask. It smashes. We then see him boarding a plane to Moscow, where he starts to feel unwell. Suddenly, a naked arm falls lifeless across the screen. We see passport stamps for Berlin, Singapore, New York . . . and finally London. And then a ghastly red stain spreads across the screen.

The genius of the series was that it was set in middle-class England — a serene Herefordshire of tennis courts, boarding schools and stay-at-home wives. Within 10 minutes of episode one, however, that England was spiralling back to the 14th century. For the Chinese scientist’s flask contained a bacterium even more deadly than Yersinia pestis, which is now generally recognised to have caused the Black Death.

The Black Death — mainly bubonic plague but also the even more lethal pneumonic variant — killed between 75 million and 200 million people as it spread westwards across Eurasia in the 1340s. The disease was transmitted by flea bites; the fleas travelled by rodent. Up to 60% of the population of Europe perished. Survivors imagined an even worse plague, originating, like the Black Death, in China. The BBC scriptwriters did their homework: the dying had all the symptoms of plague — swelling under the armpits, fever, vomiting of blood. Victims, as in the 14th century, died within a week of being infected. Rats had a starring role in the London scenes.

I have long believed that, even with all the subsequent advances of medicine, we are far more vulnerable to a similar pandemic than to, say, climate change. Bubonic plague was a recurrent killer in Europe until the 18th century and devastated China and India once again in the 1850s and 1890s. In 1918-19, the Spanish influenza pandemic killed between 20 million and 50 million people worldwide, roughly 1%-3% of the world’s population. Even in a normal year, respiratory diseases from influenza kill as many as 650,000 people globally.

So you won’t be surprised to hear that I have been obsessively tracking the progress of the Wuhan coronavirus ever since the Chinese authorities belatedly admitted that it can be passed from human to human.

The coronavirus is much scarier than ebola, which has caused outbreaks and epidemics in some African countries but has never produced an international pandemic, because ebola spreads only through contact with bodily fluids, its symptoms are too debilitating for carriers to travel far and it quickly kills most of its hosts. Viruses such as the one originating in Wuhan are highly infectious because they are airborne. This variant has the especially dangerous quality that symptoms do not manifest themselves until up to two weeks after an individual becomes infected — and contagious.

I have seen a few rash commentators downplaying the danger. But it is much too early to conclude, as Marc Siegel does in the Los Angeles Times, that the coronavirus “does not currently pose a threat [outside China] and may well never do so”. It is surely a mistake to worry, as did Farhad Manjoo in The New York Times, less about the virus than about “the amped-up, ill-considered way our frightened world might respond to it”. As for the complaint of CNN’s Brandon Tensley that the Trump administration’s coronavirus taskforce was insufficiently “diverse” — that is, it has too many white men — heaven preserve us from woke public health policy.

We don’t know enough yet to say how bad this will be. Among the things we don’t know for sure are the virus’s reproduction number (R0) — the number of infections produced by each host — and its mortality rate, or the number of deaths per 100 cases. Early estimates by the World Health Organisation suggest an R0 of between 1.4 and 2.5 — lower than measles (12-18), but higher than Mers (about 0.5). According to Johns Hopkins University in Maryland, by Saturday there were 12,024 confirmed cases and 259 deaths, for a mortality rate of 2.2%. But these numbers are likely to be underestimates.
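For readers who want to see where that 2.2% comes from, here is a minimal sketch of the back-of-envelope calculation, using only the Johns Hopkins figures quoted above. The function name and structure are illustrative, not drawn from any source, and a crude ratio of this kind ignores undetected infections and deaths still to come among the already infected.

def naive_case_fatality_rate(deaths: int, confirmed_cases: int) -> float:
    # Crude case fatality rate: deaths divided by confirmed cases, as a percentage.
    # It ignores undetected infections and deaths still to come among the
    # currently infected, so it can under- or overstate the true rate.
    return 100.0 * deaths / confirmed_cases

# Figures cited above (Johns Hopkins tallies as of that Saturday)
print(f"Naive mortality rate: {naive_case_fatality_rate(259, 12_024):.1f}%")  # ~2.2%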

In the initial outbreak, which began in late December, 27 of 41 infected individuals had direct exposure to the Wuhan food market where (incredibly, given the known risks) live bats were being sold for their meat. Since then, in the space of roughly a month, the disease has reached every province of the People’s Republic. This is far more rapid than the spread of Sars in 2002-3.

One explanation is that the volume of air travel in China has ballooned since Sars. Its 100 busiest airports last year handled 1.2 billion passengers, up from 170 million. Wuhan’s Tianhe airport was almost as busy last year as Hong Kong’s was in 2002. Disastrously, the outbreak came not long before the Chinese lunar new year holiday — the peak travel season — and the regional and/or national authorities were slow to acknowledge how contagious the virus was.

At the time of writing, a total of 164 cases have been confirmed in 24 countries other than China, including seven in America, four in Canada and two in the UK. In other words, we are now dealing with an epidemic in the world’s most populous country, which has a significant chance of becoming a global pandemic.

But how big a chance? How big a pandemic? And how lethal? The bad news, as Joseph Norman, Yaneer Bar-Yam and Nassim Nicholas Taleb argue in a new paper for the New England Complex Systems Institute, is that the answers lie in the realm of “asymmetric uncertainty”, because pandemics have so-called “fat-tailed” (as opposed to normal or “bell-curve”) distributions, especially with global connectivity at an all-time high.

Researchers define the severity of pandemics using “standardised mortality units” (SMUs), where one SMU equals a mortality rate of 0.01% or 770,000 deaths worldwide. A “moderate” global pandemic is defined as causing less than 10 SMU; a “severe” pandemic is above 10 SMU. Yet the average excess mortality of a moderate pandemic is 2.5 SMU, compared with 58 SMU for a severe pandemic. In other words, the mortality rate in a severe pandemic would be about 20 times larger, for a death toll of 44.7 million.
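As a rough check on those figures, the arithmetic can be reproduced in a few lines. The constants below are taken from the paragraph above, a world population of roughly 7.7 billion is assumed, and the variable names are mine rather than the researchers’.

WORLD_POPULATION = 7.7e9        # assumed approximate world population
SMU_MORTALITY_RATE = 0.0001     # one SMU = a mortality rate of 0.01%

deaths_per_smu = WORLD_POPULATION * SMU_MORTALITY_RATE  # about 770,000 deaths

moderate_smu = 2.5   # average excess mortality of a "moderate" pandemic, in SMU
severe_smu = 58.0    # average excess mortality of a "severe" pandemic, in SMU

print(f"One SMU is roughly {deaths_per_smu:,.0f} deaths")
print(f"Severe vs moderate: about {severe_smu / moderate_smu:.0f} times worse")        # ~23
print(f"Severe pandemic toll: about {severe_smu * deaths_per_smu / 1e6:.1f} million")  # ~44.7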

“Standard individual-scale policy approaches such as isolation, contact tracing and monitoring are rapidly . . . overwhelmed in the face of mass infection,” Norman, Bar-Yam and Taleb conclude, “and thus . . . cannot be relied upon to stop a pandemic.” Decision-makers must act drastically and swiftly, avoiding “the fallacy that to have an appropriate respect for uncertainty in the face of possible irreversible catastrophe amounts to ‘paranoia’.”

Thanks to the BBC, I have been paranoid about pandemics for more than 40 years. Nevertheless, the challenge is still to resist that strange fatalism that leads most of us not to cancel our travel plans and not to wear uncomfortable masks, even when a dangerous virus is spreading exponentially. Time to watch Survivors again. At home.

Niall Ferguson is the Milbank Family senior fellow at the Hoover Institution, Stanford

Davos Man is cooling on Stockholm Girl Greta Thunberg

 Delegates publicly praise the gloomy activist, but privately prefer dirty Don

Do you smoke cigarettes, despite knowing the risk that the habit will give you cancer? Do you drive after drinking alcohol, despite being aware that it is both dangerous and unlawful? Do you give speeches about climate change at international conferences, having flown there by private jet? Do you ever sit in a big black car in a traffic jam, when you could quite easily have walked, despite knowing that this, too, is adding yet more carbon dioxide to the atmosphere?

The term “cognitive dissonance” was coined by the American social psychologist Leon Festinger. In his seminal 1957 book on the subject, however, Festinger argued that “in the presence of an inconsistency there is psychological discomfort” and that therefore “the existence of [cognitive] dissonance . . . will motivate the [affected] person to try to reduce the dissonance and achieve consonance”. Moreover, “when dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance”.

My own observations of the human species strongly suggest otherwise. On the contrary, I see all around me — as well as throughout history — countless people not merely comfortable with cognitive dissonance but positively flocking towards situations that increase it.

Take the World Economic Forum (WEF), the gathering of billionaires, millionaires, world leaders, do-gooders, busybodies and journalists that takes place each January in the Swiss resort of Davos. The overwhelming majority of people attending this year’s conference would, I have no doubt, affirm their commitment to reducing carbon dioxide emissions to avert catastrophic climate change, even while on board their Gulfstreams and in their Range Rovers.

I doubt if a single chief executive present at the WEF last week would dare publicly to challenge the view that a modern corporation should rigorously measure and regulate its behaviour in terms of its environmental and social impact, as well as its quality of governance (ESG, for short). As the US Business Roundtable declared last August, firms must now be run not only for the benefit of their shareholders but also for all their “stakeholders”: customers, employees, suppliers and communities. Milton Friedman is dead. Long live Klaus Schwab — founder of the WEF — who pioneered this notion of stakeholder capitalism.

“ESG-omania” (or “ESG-apism”) meant Davos 2020 was an orgy of virtue-signalling on climate change and diversity. To walk down Davos Promenade, the main drag, was to run a gauntlet of uplifting corporate slogans: “Sustainable solutions for Earth, for life”; “A cohesive and sustainable world starts with data”; “Let’s bring sea level and C-level together”.

Each year the WEF’s global risks report tells us what the business elite is most worried about. Ten years ago, the top five risks were “Asset price collapse”, “China economic slowdown”, “Chronic disease”, “Fiscal crises” and “Global governance gaps”. This year? “Extreme weather”, “Climate action failure”, “Natural disasters”, “Biodiversity loss” and “Human-made environmental disasters”.

In this green new world, Davos Man must now prostrate himself before Stockholm Girl: 17-year-old Greta Thunberg, who delivered her latest tirade on Tuesday morning. “We don’t need a ‘low-carbon economy,’ ” she declared. “We don’t need to ‘lower emissions’. Our emissions have to stop. Any plan or policy of yours that doesn’t include radical emission cuts at the source, starting today, is completely insufficient.” She demanded that all participants “immediately halt all investments in fossil fuel exploration and extraction”.

The only public objection came from the US Treasury secretary, Steve Mnuchin. “Who is she — the chief economist?” he asked. “After she goes and studies economics in college she can come back and explain that to us.” Such blasphemy!

Mnuchin’s boss was also present. Four years ago the very notion of Donald Trump was inconceivable at Davos. Three years ago people were stunned by his election. Two years ago they sniggered at him. Now Trump is treated with more respect — after all, he’s still president, impeachment will not lead to his removal and the Davos consensus is that he’ll get a second term. But the applause for the president’s speech was no more than polite and every European participant complained that it was aimed too much at the American electorate. (That made me laugh. Have they no clue what Trump’s campaign speeches are like?)

This is where you might think — if you had read Professor Festinger — that the cognitive dissonance would become unbearable. For privately, after a glass or two of wine, nine out of 10 business people I spoke to admitted that they thought Greta’s demands impossible and Trump’s speech not altogether bad.

The fact is that the American economy has been doing rather well under Trump — better, certainly, than its European counterpart. Everyone in business knows this. The latest US growth forecasts may point to a slowdown (from 2.3% last year to 2% this year, according to the International Monetary Fund), but that still beats the eurozone (from 1.2% to 1.3%) and Germany (from 0.5% to 1.1%).

As Trump said, in an uncharacteristically restrained speech, American consumer confidence is buoyant, the unemployment rate is the lowest in nearly half a century, taxes on business are down, as is regulation, and the stock market is at record highs. Since his election, the US economy has added nearly seven million jobs. Housebuilding has just hit a 13-year high. Most strikingly, earnings growth has been especially strong for less skilled, lower-paid and African-American workers.

For middle America, Trump’s populist policy mix — immigration restriction, tariffs, easy money and deficit finance — is delivering. In quiet corners of the Davos congress centre you could hear Europeans wishing they could have at least a piece of this American action — and complaining that Greta’s demand for “zero carbon now” was a recipe for zero growth.

Cognitive dissonance is often like this: you say one thing in public and another in private. It was once the basis of life in communist systems all over the world. It turns out to be something capitalists can do just as easily, with very little of the discomfort predicted by social psychology.

But be warned. It is not always the case that private thoughts are right and public ones wrong. If Davos Man has come around to Trump — enough to expect, if not quite to hope for, his re-election — it is no guarantee that he will win on November 3. If January 2016 is anything to go by, you should probably bet against the Davos consensus and have a flutter on Bernie Sanders.

In the same way, if it’s climate change the WEF-ers are most worried about, you should probably brace yourself for a coronavirus pandemic. Talking of cognitive dissonance, what the hell were we all doing at a massive global conference last week? Fact: at least three of the WEF attendees were from — you guessed it — Wuhan.

Niall Ferguson is the Milbank Family senior fellow at the Hoover Institution, Stanford
