I read with interest reports of a recent ‘study’ by a scientist and journalist who managed to get a flawed piece of research published in a (non-refereed) scientific journal, from where it was taken up by the popular press (http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800). The ‘research’ claimed to demonstrate that chocolate was a beneficial addition to weight-reducing diets. Although the authors used perfectly correct statistical techniques, they applied those techniques inappropriately, showing a significant effect when one did not really exist. This is the problem with statistics: it is not just about doing the maths correctly, but more about choosing the correct test and applying that test appropriately. Hence the quote in the title. If an inappropriate technique is used to analyse data, then something may seem to be real when in fact it is not. This is what happened in this work, as they discuss in the article referenced above.
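To see how correct maths can still produce a spurious ‘finding’, here is a minimal sketch in Python. It is purely illustrative, not the actual study data: the hoax reportedly measured many different outcomes, and the group sizes, the number of outcomes (18) and the fake_trial helper below are all assumptions chosen for illustration. The point is simply that if you test enough outcomes at the usual 0.05 threshold, the chance of at least one false positive grows quickly.

```python
import random
import statistics

random.seed(42)

def fake_trial(n_per_group=15, n_outcomes=18):
    """Simulate a trial with NO real effect and count how many of
    the measured outcomes look 'significant' by chance alone.
    (Hypothetical helper; group size and outcome count are illustrative.)"""
    significant = 0
    for _ in range(n_outcomes):
        # Both groups drawn from the same distribution: no true effect.
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        # Crude two-sample t statistic.
        t = (statistics.mean(a) - statistics.mean(b)) / (
            (statistics.variance(a) / n_per_group
             + statistics.variance(b) / n_per_group) ** 0.5)
        if abs(t) > 2.05:  # roughly p < 0.05 for ~28 degrees of freedom
            significant += 1
    return significant

# Chance of at least one false positive across 18 independent
# outcomes at the 0.05 level: 1 - 0.95**18, i.e. about 60%.
p_any = 1 - 0.95 ** 18
print(f"chance of >=1 false positive across 18 outcomes: {p_any:.2f}")
```

So even with every individual test done correctly, an analysis that hunts across many outcomes and reports only the ‘significant’ one is more likely than not to find something that is not really there.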
The authors aimed to highlight poor reporting of science by the popular press, but have been both praised and criticised in fairly equal measure (http://blogs.discovermagazine.com/science-sushi/2015/05/31/believe-in-the-chocolate-diet-i-have-a-box-jelly-antivenom-to-sell-you/#.VZHB-VygKG9).
However, I think the debate is useful. Most importantly, it highlights the dangers of thoughtlessly reproducing the claims of published research. Journalists may perhaps be forgiven for thinking that because this work was published in a scientific journal it should be reliable. Unfortunately, it can be difficult to decide how valid published work is. However, you would hope that someone writing as a scientific journalist would at least have the basic skills needed to decide what is real and what is not.

Publication in a refereed journal is pretty essential. The refereeing process means that the work has been reviewed by two or three experts in the field before it is accepted for publication. This process has itself often been criticised (the subject of a future post!), but in general it is a useful safety check. The Chocolate Question was published in a non-refereed journal; this alone should raise questions.

However, not all refereed scientific journals are equal. Journals are given a citation index reflecting how important the scientific community believes their work to be, based on how often other people cite work from the journal: a scientific version of the number of ‘likes’ on Facebook? It is easy to check the citation index for a journal online, and ranking tables are produced by subject (see Blog Dec 20th 2015 Online Journal Selection Tools https://researchmedics.com/online-journal-selection-tools/). Although this is not the be-all and end-all, it is a useful guide to how reliable a piece of work should be.

So I think the ‘Chocolate Question’ is a useful prompt to think carefully about citing the work of others, to be critical about what published work we give a wider audience, and, as clinicians, about what work we allow to influence our clinical practice.
Even if we do not have the skills to dissect the way research has been done and analysed ourselves, the suggestions set out above would have raised questions about the Chocolate Question even for the non-expert. I must end by acknowledging that much good work is published in journals that do not rank highly, and that work in the best journals can still be misinterpreted or misquoted… but that will have to wait for another post!