In March 2009 the deputy chief of Italy’s Civil Protection Department and six scientists who were members of a scientific advisory body to the Department held a meeting and then a press conference, during which they downplayed the possibility of an earthquake. Six days later an earthquake of magnitude 6.3 killed 308 people in L’Aquila, a city in central Italy. Yesterday, the seven men were convicted of manslaughter and sentenced to six years in prison for failing to give adequate warning to the residents about the deadly disaster.
The news reports imply that the scientists were sentenced because of their failure to predict the earthquake. But Roger Pielke, Jr., a professor of environmental studies at the University of Colorado, says “one interpretation of the Major Risks Committee’s statements is that they were not specifically about earthquakes at all, but instead were about which individuals the public should view as legitimate and authoritative and which they should not.”
Whether it was because of their predictions or because of the authority with which they made their claims, the scientists were sent to prison for making an erroneous prediction about how nature would act. Such a judicial ruling would strike most of us Americans as absurd. We’d rightly suspect that it would give scientists an incentive not to make any predictions at all. As Thomas H. Jordan, a professor at the University of Southern California, says, “I’m afraid it’s going to teach scientists to keep their mouths shut.”
This seems reasonable until you consider what this says about the current incentive structure. As Stephen J. Dubner recently wrote, “the world is awash in prediction, much of it terrible, in large part because there are strong incentives to make bold predictions and weak penalties for bold predictions that fail to come true.”
This would be a trivial concern if there were no cost associated with “bold predictions that fail to come true.” But in many cases someone—though often not the predictor—has to pay a significant price either to protect against the predicted outcome or to prevent it from occurring. Take, for example, the case of anthropogenic climate change. Some scientists claim that we need to take drastic (and expensive) action to prevent global warming. Other scientists claim the threat is overstated and believe we should avoid implementing costly preventive measures.
In the first case, climate scientists expect the public to make an expensive bet that they’ll be proven right. In the latter, scientists expect us to make a low-cost bet that they will be correct—even though we will have to pay dearly later if they turn out to be wrong. In each case, the brunt of the cost of being wrong is transferred to the non-experts. The experts, however, often have an incentive to make a bold prediction even if there is a low probability of their being right. For them, there is almost no downside to being wrong. But for the rest of the world, economic deprivation or even loss of freedoms could result from their erroneous predictions.
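To make the asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is hypothetical, invented purely for illustration; nothing in the cases above supplies real figures, and only the shape of the payoffs matters.

```python
# Hypothetical illustration of the asymmetric payoffs described above.
# All numbers are invented; the point is the gap between the expert's
# stake and the public's stake, not the figures themselves.

def expected_cost(p_right, cost_if_right, cost_if_wrong):
    """Expected cost, given the probability the bold prediction is correct."""
    return p_right * cost_if_right + (1 - p_right) * cost_if_wrong

p_right = 0.1  # suppose the bold prediction has only a 10% chance of being right

# The expert risks little more than a reputational dent when wrong...
expert = expected_cost(p_right, cost_if_right=0, cost_if_wrong=1)

# ...while the public pays for the expensive bet whether or not it pays off.
public = expected_cost(p_right, cost_if_right=100, cost_if_wrong=100)

print(f"Expert's expected cost: {expert:.1f}")   # 0.9
print(f"Public's expected cost: {public:.1f}")   # 100.0
```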
What if scientists (and other predictors) faced a penalty for their inaccurate claims? Sending scientists to jail for being wrong about earthquakes is excessively harsh, of course. But what if they lost their jobs or had to pay a stiff fine when their prognostications failed to come true? I suspect the result would be that fewer bold predictions would be made and that the ones that were made would be more reliable and based on incontrovertible evidence. Whatever the case, we would likely all be better off if the personal cost of being wrong were substantially higher.
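Extending the same toy model (again, every number is made up), a stiff personal penalty flips the predictor’s expected payoff, and a long-shot bold claim stops being worth making:

```python
# Sketch of how a personal penalty changes the predictor's calculus.
# All figures are hypothetical, chosen only to show the sign flip.

def expected_payoff(p_right, reward_if_right, penalty_if_wrong):
    """The predictor's expected payoff from making a bold prediction."""
    return p_right * reward_if_right - (1 - p_right) * penalty_if_wrong

p_right = 0.1  # the same hypothetical long-shot prediction as before

weak  = expected_payoff(p_right, reward_if_right=10, penalty_if_wrong=1)  # today's regime
stiff = expected_payoff(p_right, reward_if_right=10, penalty_if_wrong=5)  # job loss or a fine

print(f"Weak penalty:  {weak:+.2f}")   # +0.10 -> the bold prediction still pays
print(f"Stiff penalty: {stiff:+.2f}")  # -3.50 -> only confident claims get made
```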