Why People Still Believe in Predictions, Even Though They’re Rarely True
The internet is just a fad. The world will end in 2012. Elon Musk and Grimes will soon live on separate ends of Mars. These are three forecasts from recent history, delivered as casually as weather announcements. The desire to predict, and to believe in these flights from reality, is an unflinchingly human one. Think of Nostradamus’s vaguely framed prophecies; the Orwellian spirit we evoke; the Simpsons scenes that become meme fodder; or the verbose mirror from whom the evil stepmother seeks answers.
Experts have tried to forecast the price of oil since the dawn of the oil industry in the 19th century; they have maintained their track record of being wrong ever since. The curve global economies will adhere to, the shifting demands of the job market, which brand of wellness consumers will pick, how quickly AI will change our ways of being — everything is fair game for forecasters of the future. Ironically, studies have ruthlessly shown that experts are often the worst at predicting things.
Before we break down the “why” behind this itch to anticipate, here’s a spoiler: “Predictions fail because the world is too complicated to be predicted with accuracy,” as Dan Gardner, author of Future Babble, put it. Time is non-linear; people change, ideas change, and the socio-political structures holding it all together collapse and reinvent themselves. False predictions, then, are the norm, not the exception.
Why, then, do people still believe in predictions? The urge to divine the future is not new. We are programmed to avoid uncertainty and ambiguity; our brains are literally wired that way. “From an early age, we respond to uncertainty or lack of clarity by spontaneously generating plausible explanations,” writer Maria Konnikova noted in the New Yorker. Once we have explanations, we don’t let go of them — they make us feel more in control (than we actually are). The internet then feeds on these speculations about what will, what won’t, and what may happen.
Then, there is the matter of the mind. We have a lot of biases. One of them is people’s tendency to be unrealistically optimistic. “People don’t say, ‘It can’t happen to me.’ It’s more like, ‘It could happen to me, but it’s not as likely [for me] as for other people around me,'” Rutgers University psychologist Neil Weinstein told The Atlantic. In other words, the belief that things we want to happen will actually happen rests on a (misplaced) sense of security. Weinstein identified this shade of optimism back in 1980, explaining why people predict that they are less likely than others to experience illness, injury, divorce — any and every adverse event — even when they face similar risks.
“For example, psychologists have shown that people very easily convince themselves that a random bit of good luck was, in fact, the result of skill. Even when the task at hand is guessing which side of a coin will turn up when it is flipped — the very symbol of randomness — people are easily convinced that their correct guesses were the result of skill, not luck,” Gardner told the Economic Times.
Unrealistic optimism may sound innocuous in theory; after all, what damage can billions of people believing the best of the world do? A lot. “When people predict their future behavior, they tend to place too much weight on their current intentions, which produces an optimistic bias for behaviors associated with currently strong intentions,” researchers noted in 2014. This makes people less sensitive to situational barriers — obstacles or competing demands — that may “interfere with the translation of current intentions into future behavior.”
The blind spot doesn’t exist in a silo. A “projection bias” closely guards our optimism: people tend to assume others hold opinions similar to their own, fuelling belief in predictions. In other words, it is a “self-forecasting error,” in which people overestimate how much their future selves will share their current beliefs. Two years ago, people assumed everyone would remain equally cautious about living through a pandemic. Two years later, people’s vigilance and desire to conform have fallen.
There’s also a lovely theory called “outcome-irrelevant learning”: people draw whatever lessons they want from history. It lets them feel that whatever happened before was consistent with their view, and so whatever happens next is within their control too. Experts have noted that people are notorious for erroneously basing predictions on their past experiences (oh, the irony of using the past to predict the future). The Fukushima nuclear reactor was built to withstand the worst earthquake in its recorded history, but it failed disastrously when the 2011 tsunami struck. “This is not a failure of analysis; it’s a failure of imagination,” author Morgan Housel noted.
Plus, when we get new information, a “confirmation bias” kicks in, emboldening us to believe the new information fits what we already hold to be accurate.
In the end, people are likely to believe anything if the outcome affects them. As author Daniel Defoe wrote in 1722, chronicling the Great Plague of London:
The people were more addicted to prophecies and astrological conjurations, dreams, and old wives’ tales than ever they were before or since … almanacs frighted them terribly … the posts of houses and corners of streets were plastered over with doctors’ bills and papers of ignorant fellows, quacking and inviting the people to come to them for remedies, which was generally set off with such flourishes as these: ‘Infallible preventive pills against the plague.’ ‘Neverfailing preservatives against the infection.’
People’s willingness to believe a prediction is decided by how much they need that piece of divination to be true. People predict because they dislike uncertainty, but uncertainty also makes them more likely to reach for far-fetched predictions that flatter their partiality. Think of the pandemic: prediction after prediction stacked like bricks into an unstable tower of the future.
What makes us so bad at predictions, though? For one, people are bad at synthesizing data. “Humans are very bad at understanding statistical trends and long-term changes,” political psychologist Conor Seyle told BBC Future. Even Stephen Hawking once said we can’t predict the weather more than a few days in advance. The numbers bear this out: while meteorologists can deliver a five-day forecast with roughly 90% accuracy, that figure falls to about 50% for a 10-day forecast.
Plus, we don’t even pay attention to the data most of the time. We assume it’s a lack of humanity that makes algorithms bad at predicting things, so we rely instead on what we think we know when making decisions. In one 2008 study, researchers used brain scans to pinpoint the moment participants made a decision: it came about 10 seconds before the participants themselves realized they had decided anything. It was not the data but our guts driving the decision.
Then, people are just not great at spotting long-term trends. Alertness to what is happening gradually declines, more so over generations, perhaps because longstanding trends lull people into complacency. Just one instance: in December last year, a survey recorded the expectations and predictions of some 22,000 adults. Almost three-quarters of them said 2022 was going to be a much better year than the chaotic one they knew. The survey was conducted in the pre-Omicron era, when hope of vaccinations and relative stability sprang eternal.
This may be because we don’t know which factors are the most important. Quartz cited this example: “in 1911, Thomas Edison predicted that the homes of the future would be replete with steel furniture. It was a decent guess — the material was durable, inexpensive, and ubiquitous. But he forgot that humans don’t really like steel in their homes, both physically (it’s uncomfortable to sit on) and aesthetically (doesn’t produce that cozy vibe).”
Arguably, making accurate predictions may not be biologically possible because humans may not have the brainpower for it — we would quite literally need more glucose. “Most of the time, our estimates are accurate enough to keep us alive and propagating the species. If you were doing a lot of higher-level computation, you would need more brainpower,” Susan Weinschenk, a behavioral scientist, said. We end up caring less about accuracy and more about ballpark estimates.
To get a better glimpse of the future, we’ll need to keep our biases at bay and get better at trusting numbers, data, and trends. Or we could just embrace uncertainty and roll with the punches. If not, maybe it’s time to don fanciful robes and intone: Mirror, mirror, on the wall, what does 2022 hold amid the chaos of it all?