How Did That Make It Through Peer Review?


The title’s question is one I’ve heard asked many times over the years. It has been uttered by senior colleagues, grad students, amateurs, and just about everyone else, too. The query is usually raised in response to an error of fact, an omission of citations, a misconceived analysis, or an odd conclusion in a published paper.

So, how did that make it through peer review?

The answer people usually seem to have in mind is some combination of editorial incompetence, reviewer laziness, or authorial shenanigans. Maybe the editor didn’t have the expertise to evaluate the paper, or ignored the reviewers, or was talked out of listening to the reviewers by a slick author, or was a personal friend of the author. Maybe the paper was sent to unqualified reviewers, or the reviewers were just sloppy.

Having been in the researching, reviewing, and editing game for a few years now, and having listened to the professional rumor mill for just as long, I can say that sometimes those answers are the case. For my own part, I’ve received reviews that were clearly written in haste, with maybe three sentences of superficial “accept with minor revisions” comments, contrasted against a lengthy “burn it with fire and bury the ashes” opinion from another reviewer. I’ve seen papers published with errors so egregious that it is hard to imagine how the reviewers missed them.

Yet, I suspect these kinds of situations are relatively rare. Having been involved in enough papers, and, yes, having been party to papers where I didn’t catch something in the review or editorial process, I have the ultimate answer:

Reviewers, editors, and authors are human.

What I mean by this is that scientific papers are complex beasts. A single manuscript may weave together disparate groups of organisms, unfamiliar pieces of anatomy, far-flung reaches of the globe, and multiple statistical techniques. A typical paper is seen by a single editor and two to four reviewers, so it is extremely unlikely that every facet of the paper will be evaluated by an appropriate expert. How likely is it, then, that every error will be caught and addressed?

Let’s consider a hypothetical example. Say I’m writing a paper on a new species of raptorosaur from Wisconsin, with a statistical analysis of how it expands known anatomical disparity for the group. I send it to the Journal of Dinosaur Paleontology, but they don’t have any editors who specialize in raptorosaurs. So, it goes to an expert on dinodonts (we’ll call her Editor A). The dinodont editor does a quick literature search and finds three raptorosaur experts. We’ll call them Dr. B, Dr. C, and Dr. D. They review it, suggest some minor changes to the anatomical descriptions, and after a round of revision the paper is published.

And then, oh, the humanity! Blogger E reads the paper and is dismayed by the disparity analysis. A critical data correction wasn’t done, so the supposedly significant results are meaningless, and in fact are the opposite of what the paper concluded. Dr. F thumbs through the paper and finds that one of his papers on raptorosaur morphometrics wasn’t cited. Grad Student G finds that several of the characters in the phylogenetic analysis were miscoded and the phylogeny should be a bit different than the authors presented, and so argues that the overall conclusions of the paper are highly suspect.

The modern Australian lungfish Neoceratodus; just the perfect thing to break up a long block of text. Public domain, modified from Flower 1898.

How did all of this happen? Well, none of the reviewers had statistical expertise. They were all well qualified to assess raptorosaurs, but none had ever done a disparity analysis themselves. They gave it a cursory glance, thought it looked reasonable, and moved on. When they suggested references to add, many were their own papers, and…well, the raptorosaur morphometrics paper fell through the cracks, or just wasn’t as relevant as another paper. As for the phylogenetic analysis…it is very rare indeed for reviewers and editors to check all character codings*. Dr. C might have caught some of the miscodings, but was in the midst of grading term papers and didn’t have more than a few hours to devote to the review. A few of the codings were impossible to check in any case, because they were for incompletely described fossils in a distant museum.

This is not to downplay the fact that editorial and reviewer incompetence does exist. Some journals certainly have less stringent editorial processes than others, and some editors and reviewers are not well suited for their jobs. Some authors submit manuscripts that are on the low end of the quality scale. Yet, for the bulk of journals and the bulk of manuscripts, I think most people put forth a genuine good-faith effort. I have seen errors or editorial/reviewer lapses in pretty much every journal I have read, from PLOS ONE to JVP to Nature to Cretaceous Research. Finally, I will confess that I have played the part of pretty much every character in the hypothetical above.

So, how can we productively deal with this? I don’t have a single easy solution, but I do have a few thoughts.

First, I think that respectful (emphasis on respectful) discussion in informal venues is useful: social media, journal clubs, and the like. Frequently I don’t notice something in a published paper until a colleague raises the point. I have learned a lot through these discussions, and I think they are an important part of the field.** Blogs and social media are just the kinds of venues where small quirks in papers can be noted (and if done as comments at the journal itself, as at PeerJ or PLOS ONE, the discussion can be maintained as a semi-permanent record alongside the paper***). If possible, a formal public comment or correction may be warranted, particularly for large-scale errors (but as described in the footnotes**, there are practical limitations to this, and I don’t think formal corrections preclude informal discussion).

Second, I think the overall situation makes a strong case for a more open peer review process. Only a small set of relevant experts might see a paper before publication, and it is inevitable that they won’t catch everything. Editors and reviewers aren’t omniscient. I genuinely believe that preprints prior to or alongside the formal review process would allow more of these kinds of issues to be addressed. They wouldn’t eliminate errors (and there are certainly cases where a public preprint wouldn’t be beneficial), but they would help. At the very least it would cut down on the “Well, if I had been a reviewer, that error wouldn’t have been published” grouching.

Third, this is not a case for the elimination of pre-publication peer review. Pre-publication peer review, although imperfect, genuinely improves most papers****. On average it is better than nothing for keeping the literature on track.

This has been a rather rambling piece, so let me finish it off. Reviewers and editors are human. Peer review isn’t perfect. Mistakes will make it into the permanent literature, even under the best of circumstances. A more open peer review process is one way forward.

The modern African lungfish Protopterus, probably even less tasty than it sounds; what better than another lungfish to exemplify the persistence of peer review? Image in the public domain, modified from Ray 1908.

Footnotes

*Reasons for this are myriad. It is time-consuming, particularly for a matrix with 10+ taxa and 100+ characters, and that would be small by the standards of many contemporary matrices. Additionally, most raw matrices do not lend themselves well to close examination. Presented with a .nex or .tnt file, it can sometimes take a fair bit of effort to get it into a format where an individual coding can be readily cross-referenced with a character list, and then cross-referenced with a piece of anatomy that may or may not be formally published (see the sketch below). Codings are often tough to verify, particularly if the relevant taxa are poorly described or illustrated (I’m looking at you, short-format papers; no, a 500-pixel-wide image is not sufficient to illustrate an entire skull). Reviewers are generally busy people, and it is asking a lot to expect them to go through every coding in detail. Quite frankly, it’s sometimes virtually impossible.
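To make that effort concrete, here is a minimal Python sketch of the kind of throwaway script a reviewer might write just to line codings up against a character list. It assumes a deliberately simplified, non-interleaved MATRIX block in the .nex file and a plain-text character list with one character per line; the file names are hypothetical placeholders, and real matrices are rarely this tidy.

```python
# Hypothetical sketch: flatten a simplified NEXUS matrix so that each coding
# can be cross-referenced against a numbered character list. Assumes one
# taxon per line inside the MATRIX block, no interleaving, no comments.

def read_character_list(path):
    """Return the character descriptions; character i sits at index i - 1."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def read_matrix(path):
    """Collect 'taxon 0101?10...' rows from the MATRIX block of a .nex file."""
    codings = {}
    in_matrix = False
    with open(path) as f:
        for line in f:
            stripped = line.strip()
            if stripped.upper() == "MATRIX":
                in_matrix = True
                continue
            if not in_matrix:
                continue
            if stripped.startswith(";"):  # semicolon terminates the block
                break
            if stripped:
                taxon, states = stripped.split(None, 1)
                codings[taxon] = states.replace(" ", "")
    return codings

if __name__ == "__main__":
    characters = read_character_list("raptorosaur_characters.txt")  # hypothetical file
    matrix = read_matrix("raptorosaur_matrix.nex")                  # hypothetical file
    # One line per coding: taxon, character number, description, scored state.
    for taxon, states in matrix.items():
        for i, state in enumerate(states):
            print(f"{taxon}\tchar {i + 1}\t{characters[i]}\tstate {state}")
```

Even once every coding is laid out this way, each one still has to be checked against the actual anatomy, which is where the real time goes.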

**A frequent criticism of social media, blogs, etc., is that they aren’t “peer reviewed”. On the one hand, I get this: it is hard to cite or verify casual comments. Some writers adopt an unnecessarily hostile attitude, which also (rightfully) raises hackles and makes it easier to dismiss critiques [as a side note, this also happens in private, anonymous peer reviews; believe me, I’ve witnessed shenanigans that make the most caustic blog post look like a kid’s book. That doesn’t legitimize either behavior, of course]. But there are precious few venues where short critiques or corrections can be published. If I find a genuine coding error in a phylogenetic matrix, few if any journals will publish my correction, and even if they do, I will come across as a bit petty for publishing on a minor coding error. I might include it as a brief note in a future manuscript, but that could be years down the line, or it might not really be relevant to anything I publish. If we’re talking about a correction to something in an edited volume: good luck. And if we spent all of our time addressing every error in a formal venue, there would be little time left to do new science. These sorts of corrections don’t really “count” as real publications. It’s a no-win situation, one that plays into the hands of those who think any critique outside the “formal” literature can be ignored.

***That said, I think some degree of moderation is warranted. Not all comments are equal, and there is a threshold separating useful comments from non-useful ones. I also think that signed (non-anonymous) comments are important in the context of scientific critique.

****Note that I did not say “pre-publication peer review makes papers perfect”. A rather common and annoying straw man, in my view, is “Well, peer review isn’t perfect, so we should just eliminate it.” I disagree.

Final note: Despite my statements to the contrary, some may interpret my words as “You must do [preprints/blogging/open review/social media], and it is good in every case”. Short answer: no, that is not what I am trying to say, nor do I believe it. I’m okay with that.



[Image credit above: AJC ajcann.wordpress.com]

Creative Commons License
Except where otherwise noted, the content on this site is licensed under a Creative Commons Attribution 4.0 International License.