The new editorial team of the Journal of Politics, one of the premier disciplinary journals in the field of political science, has announced that going forward, all experimental research submitted to the journal must be pre-registered:
Pre-registration: authors who want to submit manuscripts containing original experimental work, including laboratory, field, and survey experiments are required to submit proof of study/design pre-registration with one of the available research registries (e.g., EGAP, RCT, Open Science). Pre-registration of other types of research design is very much encouraged. The submission of unregistered laboratory, field, and survey experiments will not be accepted. This policy will be phased in: For manuscripts submitted in 2021, authors need to justify in a letter to the editor why the study was not or could not be pre-registered.
There’s lots of chatter on Twitter about the implications of such a policy. In my view, it risks creating a series of inequities that will make it harder for those without resources to publish experimental research, especially given how common follow-up studies are.
Setting this particular issue aside, I have two general concerns.
The first is that I am unclear why this policy would be obligatory for experimental research and not for non-experimental research. I say this as someone who has 2.3 million observations from the Indonesian census sitting on my computer (through IPUMS). You want an argument that having a phone line increases your propensity to speak Indonesian, and that that effect varies by the number of family members living in your household? I can cook up those significance stars for you in 20 seconds. If we truly believe that we need to police p-hacking through obligatory pre-registration, then I do not understand the substantive argument for why this would be obligatory for experimentalists but not for those working with observational data.
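To make the "20 seconds" point concrete, here is a minimal simulation sketch (invented for illustration, not based on the census data mentioned above): even when an outcome is pure noise, running a difference-in-means test across enough arbitrary subgroups will mechanically turn up "statistically significant" results at roughly the rate the significance threshold implies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000                       # observations; the outcome is pure noise
y = rng.normal(size=n)           # outcome, unrelated to anything by construction
n_subgroups = 100                # number of arbitrary subgroup splits to try

false_positives = 0
for _ in range(n_subgroups):
    g = rng.integers(0, 2, size=n).astype(bool)   # a random, meaningless "subgroup"
    # Difference-in-means z-test (normal approximation is fine at this sample size)
    diff = y[g].mean() - y[~g].mean()
    se = np.sqrt(y[g].var(ddof=1) / g.sum() + y[~g].var(ddof=1) / (~g).sum())
    if abs(diff / se) > 1.96:
        false_positives += 1

# With a 0.05 threshold, about 5% of these null tests cross the line by chance,
# so a determined analyst can almost always report *something* "significant."
print(false_positives, "of", n_subgroups, "null tests came back 'significant'")
```

Nothing in this sketch is specific to observational data, which is exactly the point: the specification search is available to anyone with a rich dataset and no pre-registered analysis plan.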
Now, there may be a practical argument here that pre-registering observational studies is impossible, because we can never verify that pre-registration was done before seeing the data (except in rare circumstances such as this one). But that’s not even correct! This practical objection to pre-registering observational research only applies to the analysis of secondary data. Why not insist on pre-registration of survey data analysis? Elite interviews?
The second concern that I have is that we actually do not have a common disciplinary understanding of what constitutes a pre-registration. I will not link to examples here, but I have been shocked by what editors believe “counts” as a pre-registration, and the inequalities that emerge as a result. A vague statement that “we will collect the data, and then we will test the hypothesis that X causes Y using regression” suffices for articles that currently appear in top disciplinary and general science journals. Contrast that with the “earnest/completist” version of pre-registration that many of us follow, in which we announce not only the hypothesis but also the coding rules and statistical analyses, even providing the actual computer code that we will run once the data are in.
Insistence on pre-registration for experiments pushes us back to the antecedent question of whose standards must be followed for ascertaining that a study has been pre-registered. It introduces opportunities for editorial and reviewer discretion as a result. Is my incentive as an author to pre-register only the main analysis, and then to announce that any subgroup/heterogeneous treatment effects analysis that I might cook up later is exploratory? Will I get the benefit of the doubt from the referees if I just say that? Will a PhD student get the same benefit of the doubt? Who gets to say how much exploratory research is too much?
Surely there are other reactions out there to this particular editorial policy of requiring pre-registration for experimental research alone. But these two jump out immediately to me as reasons to be careful in implementing prospective rules about how research must be conducted.
Stepping back, I generally find efforts to implement hard and fast rules to discourage p-hacking and p-fishing to be misguided. These problems are hard to solve, but I do not know a model of the scientific process that works through rigid pre-registration standards (a point I make here). I’d prefer to embrace a Bayesian approach to how we evaluate research when p-fishing is possible, a topic I first touched on here but which Andrew Little and I addressed formally here. In the JOP!
P.S. In the course of reading the Twitter chatter about the JOP’s new editorial team, I saw some criticism of the new team for lacking an Associate Editor who covers political theory. I think it would be a terrible mistake for a premier disciplinary journal not to have a political theory editor. I hope that I’m misunderstanding this situation, or that it will be rectified quickly.