My friend Nate Jensen has written a brutally honest and really important post about the peer review process. Every grad student and assistant professor should be required to read it. I leave it as an exercise for the reader to write down a timeline of when a paper ought to be submitted if you expect to have a good publication by a particular date.
A silly personal note: on one of Nate’s papers that endured multiple submissions, yours truly served as a reviewer at multiple journals. The manuscript certainly improved over time, but the whole process ended up being rather ridiculous. On the last review, I wrote “I have essentially run out of criticisms to make of this paper and it is time for it to be published.”
(Nate and at least one of his co-authors know that this was me—they figured it out on their own, and we have talked about it since—so I am not betraying any confidences here. I knew who the authors were because we sometimes share work with each other. How’s that for anonymous peer review?)
A not-so-silly personal note: I can tell similar stories. This paper was rejected four times. This paper was rejected at the journal that commissioned it, then desk-rejected at a different journal, before being accepted. This paper was desk-rejected at an Asian affairs journal, but no one informed me for a year. Which, as you can imagine, really grinds my gears. But as much as my feelings have been hurt, I have never once complained to an editor, because I have never been treated unfairly (and the last one was not the editor’s fault).
A final note, before returning to work on revising a paper that recently earned a fair rejection at a very good journal: I appreciate Nate’s honesty and his willingness to reveal something about how the process looks. As I tweeted yesterday,
— Tom Pepinsky (@TomPepinsky) September 14, 2013
But I do think that it raises some questions that are worth some reflection.
- What kind of model of science are we following when appearing in print/surviving peer review is such a capricious process? Especially if the reviews are about framing the contribution rather than the technical details of the analysis? This suggests to me that political science may aspire to be a science, but that the work of political scientists is really much more like a craft. Of course, there is nothing special about political science here, but we ought to be clear about it.
- I wonder how many of Nate’s R+Rs and rejections were justified by a referee’s request for new specifications of an empirical model. We talk about “researcher degrees of freedom,” but in my experience “referee degrees of freedom” are just as problematic, especially if we take the position that if the referee can conceive of a specification in which a coefficient of interest does not cross an arbitrary line of statistical significance, then the results are not “robust.” One paper of mine—at the request of multiple rounds of reviewers—has been subjected to four times as many robustness tests as there are observations in the dataset.
Am I missing anything?