What makes good science, and why pundits are so bad at making predictions

I just posted a pretty lengthy response to a blog post by Bryan Appleyard that was linked on Andrew Sullivan's blog:

Writing about Jonah Lehrer’s book on decision making in The Sunday Times, I didn’t mention the findings of Philip Tetlock at Berkeley. He studied pundits and discovered they were, to a rough approximation, always wrong when making predictions. He took 284 pundits and asked them questions about the future. Their performance was worse than chance. With three possible answers, they were right less than 33 per cent of the time… 

Tetlock said: ‘The dominant danger remains hubris, the vice of closed-mindedness, of dismissing dissonant possibilities too quickly.’

My response follows:

Interesting post! Tetlock’s work is pretty cool, but this point has been made before, over 100 years ago, in the great paper “The Method of Multiple Working Hypotheses” by TC Chamberlin:

The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence…. There is an unconscious selection and magnifying of the phenomena that fall into harmony with the theory and support it, and an unconscious neglect of those that fail of coincidence…. There springs up, also, an unconscious pressing of the facts to make them fit the theory…. The search for facts, the observation of phenomena and their interpretation are all dominated by affection for the favored theory….”

Chamberlin understood that his method of multiple working hypotheses, in which a thinker constantly examines a problem through multiple lenses, could counteract the biases of “intellectual parentage” through an “effort to bring into view every rational explanation of new phenomena, and to develop every tenable hypothesis respecting their cause and history.” His target audience was the fellow hard scientists who read Science, the journal in which his article was published, but his method applies to the social sciences just as much as to biology or physics. We’ve all assumed, and now Tetlock has shown, that political pundits are usually full of hot air, and Tetlock is right on target when he blames closed-minded hubris. Hubris, a fondness for “the facts that fall happily into the embrace of theory,” and “a natural coldness toward those that seem refractory” don’t mix well with rational scientific thought and progress.

Pundits (and politicians) today find success only if they can carve out a deep niche from which they can reliably spew comfortable, predictable opinions that are easily washed down by a cool glass of pre-favored theory, rather than a hard-to-swallow shot of openness toward multiple theories and other possible explanations. This is why they are so often wrong: pundits today don’t actually give a problem much thought in its own broader context. They think and write, but more about how to fit the problem into their well-established box than about how to address it in a rational and creative way.

John Platt updated and reiterated Chamberlin’s argument in another great Science paper in 1964. By then multiple working hypotheses had technically become the norm, but Platt felt the scientific community needed a reminder that science progresses best by “strong inference” (also the title of his paper), which he sums up in three steps:

1) Devising alternative hypotheses;

2) Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;

3) Carrying out the experiment so as to get a clean result.
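Platt’s loop is simple enough to sketch in code. The hypotheses, predictions, and “observations” below are invented placeholders (echoing the tax-cut example later in this post), not anything from Platt’s or Chamberlin’s papers; it is just a minimal illustration of keeping several hypotheses alive and excluding whichever ones a discriminating observation rules out:

```python
# A toy sketch of Platt's three steps as a loop: keep several working
# hypotheses alive, find an observation they disagree about, and exclude
# whichever ones that observation rules out. Everything named here is a
# made-up placeholder, not material from Platt's or Chamberlin's papers.

def strong_inference(hypotheses, observations):
    """hypotheses: name -> prediction function; observations: (condition, observed outcome) pairs."""
    alive = dict(hypotheses)
    for condition, observed in observations:
        predictions = {name: h(condition) for name, h in alive.items()}
        # Step 2: the test is only "crucial" if the surviving hypotheses disagree about it.
        if len(set(predictions.values())) < 2:
            continue
        # Step 3: a clean result excludes every hypothesis it contradicts.
        alive = {name: h for name, h in alive.items() if predictions[name] == observed}
        if len(alive) <= 1:
            break
    return alive

# Hypothetical example: three "pundit" theories about what a tax cut produces.
hypotheses = {
    "always_grows":   lambda economy: "growth",
    "always_shrinks": lambda economy: "contraction",
    "it_depends":     lambda economy: "growth" if economy == "expansion" else "contraction",
}
observations = [("expansion", "growth"), ("recession", "contraction")]
print(list(strong_inference(hypotheses, observations)))  # ['it_depends'] is the only survivor
```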

This all seems pretty basic, but Platt points out that nobody really does it anymore. Often steps 1 and 2, the ones that “require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome and exclusion will be related in a rigorous syllogism,” are passed over entirely, and previously proposed theories are lazily accepted as the touchstone for everything the world throws at them. Pundits have it tough; political or economic science doesn’t afford the same opportunities for experimentation, but that doesn’t mean pundits and politicians get to skip steps 1 and 2: even in a thought experiment, multiple hypotheses and multiple possible outcomes must be considered and eliminated using whatever data is available and a hearty helping of reason.

Platt goes on to praise figures like Louis Pasteur, who “every two or three years…moved to one biological problem from another,” bringing to each “a completely different method of reasoning.” Today’s pundits, as well as many academic political scientists and economists, become too well known as “the guy who’s against free trade” or “the fervent neoconservative,” for example; many may earn some fame for a new theory, but then settle into polishing their gem and using sloppy reasoning to bend it to answer every problem and every circumstance. Since Reagan, the GOP has proposed tax cuts as the proper and appropriate reaction to every economic change: The economy is expanding? Tax cuts! The economy is marginally shrinking? Tax cuts! The economy is in a recession? Tax cuts! At no juncture do they entertain any other hypothesis, nor the possibility of any other outcome.

Platt, too, was aiming his paper at hard scientists, but social scientists (and pundits and journalists) ought to give it a good hard read. I have confidence that the Obama administration has embraced strong inference, at least more than any administration in the recent past. Obama has already admitted mistakes and acknowledged that not every theory or idea will turn out to be the right one. Rigorous evaluation of multiple proposals and theories (including those that may not fit the historical American definition of liberal) to fix the economy and run the government seems to be taking place under Obama, and that is certainly a good thing. Let’s just hope that he, too, doesn’t settle in with what’s familiar, and that he continues to be a strong thinker throughout his time in office.

If anyone is interested:

Chamberlin, T. C., “The Method of Multiple Working Hypotheses,” Science, 1890.

Platt, John, “Strong Inference,” Science, 1964.


One thought on “What makes good science, and why pundits are so bad at making predictions”

  1. george brett says:

    I think you should differentiate more between prediction and explanation. The pundits are being asked to predict future events, while the Science article you discuss targets scientists who are seeking explanations for observed occurrences, which they are then able to test in at least partially controlled environments.

    I would compare the pundit point more to a study of economic forecasting by James Montier, “The Folly of Forecasting: Ignore All Economists, Strategists, and Analysts,” published in Global Equity Strategy, August 24, 2005. He makes the point that “economists are really good at telling you what has just happened!” and he has some very compelling data.

    Tetlock and Montier agree that hubris plays a central role in this phenomenon, although Montier calls the factor “overconfidence.” Overconfidence, as Montier uses the term, refers to a “situation where people are surprised more often than they expect to be.” So a person is “well calibrated” if, when asked 10 questions and instructed to give a range within which they are 90% confident the answer will fall, their range is correct for 9 of the 10 questions.

    Many experts are overconfident. Montier tested, among others, weathermen and doctors: while weathermen seem to be fairly well calibrated (e.g. when 80% sure they are correct, they are correct 80% of the time), doctors were right only 15% of the time when they were 90% sure they were correct. Investment professionals also fared poorly.
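Montier’s calibration test is easy to express in code. Here is a minimal sketch, using made-up forecasts rather than Montier’s data: bucket each answer by the confidence the forecaster claimed, then compare that claim against the realized hit rate.

```python
# A rough sketch of the calibration check described above: a forecaster is
# "well calibrated" if, among answers given with 90% confidence, about 90%
# turn out to be correct. The forecasts below are invented for illustration,
# not Montier's data.
from collections import defaultdict

def calibration(forecasts):
    """forecasts: iterable of (stated_confidence, was_correct) pairs."""
    buckets = defaultdict(list)
    for confidence, correct in forecasts:
        buckets[confidence].append(correct)
    # Hit rate per stated confidence level; well calibrated means hit rate is close to the confidence.
    return {conf: sum(hits) / len(hits) for conf, hits in buckets.items()}

# Ten made-up calls, all stated at 90% confidence, only two of them right:
# a well-calibrated forecaster would land near 0.9, not 0.2.
forecasts = [(0.9, True), (0.9, False), (0.9, False), (0.9, False), (0.9, True),
             (0.9, False), (0.9, False), (0.9, False), (0.9, False), (0.9, False)]
print(calibration(forecasts))  # {0.9: 0.2}, i.e. badly overconfident
```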
