Saturday, 9 November 2013

Is this thing on?

If you're a regular reader (hello, both of you!) you may have noticed a slight gap in the update schedule since, er, July.

One of the great pleasures of having only a small following is being able to sod off for a few months without upsetting anyone. I imagine if ChemBark or Derek Lowe did so, their inboxes would very quickly fill with confused messages checking if they were still breathing...

Since I updated earlier tonight with a short rant, I thought I'd take the chance to explain my absence and to engage in some shameless self-promotion.

A big part of this is that I've simply been online less, and therefore had much less involvement in the chemistry community - and less to say. Maybe that's a blessing. What writing I have done has been published elsewhere: I wrote a blogroll and book review for Nature Chemistry, and some news articles for The Conversation. All of which is pretty exciting: it's great to get feedback on my writing from professional editors and I feel that I've learned a lot.

Final comment: there won't be a #chemclub review for November, as unfortunately the contributor this month had to cancel due to more important commitments. Regular updates resume next month.

Unlikely results?

The Economist's "daily chart" from October 21st came with a striking headline: "Why most published scientific research is probably false".

The accompanying video explains that under certain assumptions, we are drawn inexorably to the conclusion that most scientific results are false. I won't outline their logic here: the video is only a minute and a half long, so go ahead and watch it. I'll wait.

Did it annoy you as much as it annoyed me? The claim - "most published scientific research is probably false" - rests on three assumptions:
  • Most science uses a statistical significance cut-off of p=0.05, and effectively no 'insignificant' results are published.
  • 10% of hypotheses tested will turn out to be correct.
  • The false negative rate is high - possibly as high as 40%.
This is nonsense.
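
For anyone who'd rather see the arithmetic than watch the video, here's a back-of-the-envelope sketch in Python of how this kind of argument is put together. The figures (a p=0.05 cut-off, 10% of hypotheses true, a 20-40% false negative rate) are the ones quoted above, treated purely as illustrative inputs - this is my reconstruction, not The Economist's own calculation.

# Of the results that clear p < alpha (and so, on the video's assumptions,
# get published), what fraction are actually false?
def false_discovery_rate(prior_true, alpha=0.05, false_negative_rate=0.4):
    true_positives = prior_true * (1.0 - false_negative_rate)   # real effects detected
    false_positives = (1.0 - prior_true) * alpha                # noise passing the cut-off
    return false_positives / (true_positives + false_positives)

print(false_discovery_rate(0.10))                            # ~0.43 with a 40% false negative rate
print(false_discovery_rate(0.10, false_negative_rate=0.2))   # ~0.36 with more conventional 80% power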


Maybe I'm a little late to the game in criticising this, as it's a few weeks old now, but it's going around my Facebook feed this weekend without any criticism, so I thought I'd comment. The tl;dr of this post is "your headline is bad and you should feel bad".

Firstly: entire disciplines that are firmly "science" either rarely, if ever, use statistical significance as a criterion for publication, or demand far stricter thresholds than p=0.05. Most organic chemistry falls into the former group, and particle physics into the latter. Are these disciplines negligible to science as a whole?

Secondly, the ratio of true to false positives and negatives in this thought experiment is heavily dependent on the starting assumption that 10% of the hypotheses tested are true. I don't particularly object to the number they chose - but it is arbitrary and unjustified. If we're a little more generous to scientists then the problem diminishes (and vice versa). If you think scientists are pretty good at generating likely hypotheses, then crunching the numbers in this fashion gives a result that can inspire confidence. If you think we're generally on fishing expeditions, testing a dozen hypotheses for every validation, then you'll end up with a pessimistic view of the literature.
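
To see just how much hangs on that 10% figure, sweep the assumed proportion of true hypotheses while holding everything else fixed (same back-of-the-envelope arithmetic as above, with 80% power assumed; again, my numbers, not theirs):

# Fraction of 'significant' results that are false, as a function of the
# assumed proportion of tested hypotheses that are true.
def false_discovery_rate(prior_true, alpha=0.05, false_negative_rate=0.2):
    true_positives = prior_true * (1.0 - false_negative_rate)
    false_positives = (1.0 - prior_true) * alpha
    return false_positives / (true_positives + false_positives)

for prior in (0.05, 0.10, 0.25, 0.50):
    print("{:.0%} true -> {:.0%} of positives false".format(prior, false_discovery_rate(prior)))

# 5% true  -> ~54% of positives false
# 10% true -> ~36%
# 25% true -> ~16%
# 50% true -> ~6%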

Finally, at the end of the video the author ramps up the estimated false negative rate until the false positives utterly overwhelm the true positives - which is what delivers the conclusion in the headline. What's the basis for that estimate of the false negative rate?

The whole thing smacks of an idealised view of science as a discipline with unified practices, operating according to a classic philosopher's picture of the scientific method plus a healthy dose of publication bias: every scientist tests well-defined hypotheses one by one, analyses the results statistically, and publishes only those with p<0.05.

This kind of thought experiment can usefully explain the crisis of reproducibility in certain disciplines that do rely heavily on p values for publication, such as some of the biomedical sciences. But the unqualified extrapolation of this to Science™ is absurd. The Economist can do better.

(This Saturday night rant brought to you by an overabundance of caffeine.)