Is science really better than journalism at self-correction?


Rolling Stone’s retraction of an incendiary article about an alleged gang rape on the campus of the University of Virginia certainly deserves a place in the pantheon of legendary journalism screw-ups. It is highly unusual – although not unprecedented – for a news organization to air its dirty laundry so publicly.

One meme that’s emerged from the wreckage is that journalism ought to be more like science, which, it’s thought, is the epitome of a self-correcting system. In a story about the Rolling Stone retraction, for example, the New York Times reported that Nicholas Lemann, former dean of the Columbia Journalism School, teaches his students that the “Journalistic Method” is much like the scientific method:

It’s all about very rigorous hypothesis testing: What is my hypothesis and how would I disprove it? That’s what the journalist didn’t do in this case.

That’s a pretty analogy – but even in science, the reality doesn’t live up to the ideal.

Steve Coll’s exhaustive investigation into what went wrong with the magazine story was rare enough that it warranted a press conference. Rolling Stone and Columbia Journalism Review both published the report.

Mistakes were made

What’s certainly true, as a definitive 12,700-word report by Lemann’s successor at Columbia, Steve Coll, and colleagues points out, is that Sabrina Rubin Erdely, who wrote the magazine article, did not attempt to disprove her hypothesis by interviewing the alleged perpetrators. Nor did Rolling Stone’s editors require her to go back and do such reporting before publishing the article.

We’d consider that akin to a failure of peer review, the process by which experts look for problems in methodology that might undermine a scientist’s conclusions. When you’re not pushing yourself – or someone else – to look for those problems, confirmation bias can win. That natural tendency to seek out evidence that supports a narrative or theory we already believe is very powerful.

It’s also true that two of the three broad categories of reasons media outlets retract articles, as described in the New York Times, are roughly the same in science: outright fabrication and plagiarism. (The third category, and the one that applies in the Rolling Stone debacle, relates to lack of skepticism. We’ll get back to that in a moment.)

Ideals of scientific publishing are a standard to emulate

But the similarities end there.

Science, and scientific publishing, rarely tells the story of a single event. Published papers, particularly in the world of biomedicine, typically relate what happened in experiments involving multiple tests. What Lemann is in fact describing is just one small, although essential, aspect of the scientific method – the effort to identify and eliminate bias in one’s thinking or testing of a hypothesis.

When science works as designed, subsequent findings augment, alter or completely undermine earlier research. When something new emerges that revises the prevailing wisdom, scientists can, and often do, correct the record by retracting their earlier work.


Reality falls short

The problem is that in science – or, more accurately, scientific publishing – this process seldom works as designed.


Through our work on Retraction Watch, we have found that journals – even when they end up retracting, which is not as often as they should – rarely give a full and clear picture of how and why a paper went off the rails. Retraction notices in science typically do not resemble the explications one finds in newspapers when an article is pulled – and never do they involve a report as detailed as Coll’s overview of the admittedly unique Rolling Stone case. Rather than publishing for all to see the issues that led to a retraction, some journals have even advised readers to contact the authors of the original papers for more information – which somehow strikes them as a reasonable course of action.

While media watchdog Craig Silverman has done terrific work cataloging journalism corrections, as far as we know, no one has a comprehensive list of newspaper and magazine retractions, which seem to be less frequent than scientific retractions. Scientific journals still retract very rarely: between 400 and 500 articles each year out of roughly 1.4 million papers published, or about 0.03%.

That brings us back to lack of skepticism. Just as a good narrative sells in the media, a compelling storyline carries outsize weight in science. Journals are more likely to publish positive findings than negative results. And as emerging scholarship shows, it’s not unusual for them to publish studies that simply are not true. That’s confirmation bias at work again, aided and abetted by the way many scientists use statistics. Simply put, if you run 20 experiments, one of them is likely to produce a statistically significant – and therefore publishable – result by chance alone. But publishing only that result doesn’t make your findings valid. In fact, quite the opposite.
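To make that arithmetic concrete, here is a minimal sketch in Python (the function name and the sample sizes are ours, for illustration; it assumes independent experiments, each tested at the conventional p < 0.05 significance threshold):

# A sketch of the statistics at work: the chance that at least one of
# n independent experiments "succeeds" by luck alone at threshold alpha.
def false_positive_odds(n_experiments: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** n_experiments

for n in (1, 5, 20):
    print(f"{n:>2} experiments: {false_positive_odds(n):.0%} chance of a fluke result")
# Prints 5%, 23% and 64% respectively.

Run 20 experiments, in other words, and the odds are roughly two in three that at least one of them will cross the significance threshold purely by chance.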

Why does this happen? Because the entire scientific community, from junior researchers to editors-in-chief, is vulnerable to the same sort of credulity that afflicted Rolling Stone’s editors – a particular form of confirmation bias. The result, in the biomedical sciences at least, is a crisis of irreproducibility. Simply put, much, if not most, of what gets published in scientific journals today is only somewhat likely to hold up if another lab repeats the experiment – and, chances are, not even that.


Everyone can and should do better

Journals, like magazines, ought to be held to a higher standard for the material they publish. Just as they are ranked on how often their articles are cited (the so-called impact factor), they ought to be rated on how often they retract papers and how forthcoming those notices are. They also should be graded on how many of the findings they publish are reproduced by future studies, what we’ve called a “reproducibility index.”

But there are other real-life consequences: after the Coll report appeared, the UVA fraternity at which the rape purportedly occurred announced that it would sue Rolling Stone over the misleading coverage. Lawyers can play a big role in science, too; we’ve seen a number of recent cases in which vigorous legal representation has tried to keep the real story out of retraction notices – and out of the public eye.

Lawsuits are not the only fallout from error and fraud, however. A 2013 study found that scientists who retract papers for fraud are likely to see what you’d expect: a dip in citations to their other work. In fact, entire related fields can see those dips. But what was heartening about that study was that researchers who voluntarily retract papers for honest error actually saw a small bump in their citations. If scientists, consciously or not, are rewarding good behavior, doesn’t it make sense that people reading newspapers and magazines will, too?

As two people working in one field – journalism – and covering another – science – we’ve become very conscious of the fact that reporting and research are subject to the same frailty as every other human endeavor, and it’s not clear to us that either is better at self-correction. Both fields can learn from one another. They can also learn from fields such as surgery, where successful operating rooms have realized that empowering everyone to speak up – not just senior surgeons, who might be reluctant to acknowledge their errors – makes patients safer. Admitting we’re human is difficult, but boy does it make a difference.


This piece was co-authored by Adam Marcus. Oransky and Marcus are co-founders of Retraction Watch.


This article was originally published on The Conversation.
Read the original article.
