The Folly of Prediction

“Fact: Human beings love to predict the future. — Fact: Human beings are not very good at predicting the future. — Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.” This is how an article by “Freakonomics” author Stephen J. Dubner begins. Why is that, and which predictions can we actually believe?


In his article on the Freakonomics website, Stephen J. Dubner elaborates a bit further on this. What does it mean that “the incentives to predict are quite imperfect”? Steven Levitt, Dubner’s co-author, points out that whenever we make a crazy prediction and it comes true, we and others keep reminding everyone of it, whereas the many other crazy predictions that never came true are no longer talked about.

The article outlines a few studies on the quality of predictions. For example, experts often know little more than laypersons, and their predictions are often only slightly better than purely random ones. Yet they are not aware of this fact and instead believe strongly in their expertise (this, by the way, was a rather famous study by psychologist Philip Tetlock). Furthermore, we often misinterpret the wording of predictions. When something “could” happen, its actual likelihood lies somewhere on a huge continuum, ranging from extremely unlikely to extremely likely. Moreover, when someone predicts an extreme outcome, we tend to overestimate that person’s accuracy in future predictions. Thus, one could consider bad predictions the result of an interaction: on the one hand, experts who are not as good at predicting as they believe; on the other, laypersons who would love to live in a predictable world and therefore interpret predictions more deterministically than they are meant.

Why are there so many bad predictions out there? Economist Robin Hanson, who is cited in the article, thinks that one problem is that we would be better off saying nothing when we do not know what to predict. However, what frequently happens is that journalists, for example, ask an expert for a prediction, and even if the expert has nothing to say, he or she will make a forecast just in order to say something. Robin Hanson advocates what he calls a prediction market, i.e. a market in which people bet on their forecasts, so that only those who really have something valid to say speak up, whereas the others stay silent (instead of making invalid predictions).

There was a whole radio show on this topic; its transcript is available on the Freakonomics website.

What does this mean? If you consider yourself an expert in a certain field, be aware that you might be overconfident. Constantly challenge your own predictions. If you are a layperson, be careful with the predictions that so-called experts make. The fact that they have been right once or twice does not mean that they will always be right. Take a closer look at previous predictions the expert has made and find other experts who make forecasts in the same area. Compare them and always remain critical. Remember: it is hard to make predictions, especially about the future.

Becoming smarter by online brain training?

Some time ago, we reported on a few studies that found it possible to improve our ability to reason, to solve problems, and to deal with new information, in short, our fluid intelligence. Quite a few commercial online brain training programmes build on these findings, and many people out there buy them. But do they really work?

In an article in The Guardian / The Observer, journalist Elizabeth Day reviews some of the research on the effectiveness of such programmes. She summarises the study by Jaeggi and colleagues which found that working memory can be improved through training, and that this training in turn improved participants’ performance on an IQ test. We reported on this study and a subsequent one in an earlier post. However, she also reports on a study by Thomas Redick from Purdue University and his colleagues that failed to replicate the findings by Jaeggi and colleagues. Adrian M. Owen and Adam Hampshire from Cambridge University summarised the findings and conducted their own study using a variety of tasks rather than just one. They come to the conclusion that people improve on the tasks they train on, but that there is no transfer effect to other tasks, even closely related ones.


In an interview, David Z. Hambrick from Michigan State University, one of the co-authors of the aforementioned study, outlines the findings by Jaeggi and colleagues as well as his own. He makes a few interesting points. He says that even if we improve our performance on a reasoning test, this does not mean that we are more intelligent, because intelligence is a whole bundle of abilities. He also points out that it is not yet clear what the basis of intelligence, or its common factor, actually is, and thus what brain training actually improves. It may be working memory, but it may just as well be attention, or both. He makes it clear that we have no convincing evidence that we can really improve our intelligence through online brain training, nor evidence that it is not possible.

In the end, he suggests two things: improving our crystallised intelligence (i.e. our knowledge) and doing physical exercise. Both are known to work, whereas with online brain training, we simply cannot be sure (yet). We will keep you updated on recent developments in this field!