I remember exactly where I was and what I was doing the first time I saw Gerber, Green, and Kaplan’s “The Illusion of Learning from Observational Research” (PDF): eating some peanut butter cookies in the back of a seminar room filled with august political scientists discussing methodology and the study of politics. I remember the reaction being pretty stark: “OK, here it is, the argument that we should all do experiments.” Like many pieces of this sort, I suspect that the Gerber et al. piece has been cited more than it has been read. Also like many pieces of this sort, the title does not help. The essay considers the problem of learning when confronted with an experimental result and an observational result subject to bias, and also asks how one would optimally allocate finite resources between research of these two types. The paper was subsequently published as part of the volume Problems and Methods in the Study of Politics.
I recently finished an essay with Andrew Little that argues that learning from biased research designs is not an illusion. We argue instead that we can reformulate this challenge as a Bayesian learning problem, analogous to many formal theories of learning in the social sciences. The key to our argument is insisting that researchers do have (informative and often non-neutral) prior beliefs about both causal effects and the bias in observational research designs. One provocative implication of our argument comes from supposing otherwise: that you are unwilling to specify prior beliefs about causal effects or bias. If that’s the case, as we note, then it follows that
no result – hugely positive, hugely negative, or zero – would be more or less surprising to you.
We clearly don’t live in a world where researchers have no prior beliefs. Our paper shows how to think through the problem of learning from observational research once we recognize that we do have those beliefs.
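To make the logic concrete, here is a minimal sketch of this kind of Bayesian updating, under assumptions of my own choosing (it is not the model from the paper): normal priors on the causal effect and on the bias, and an observational estimate equal to effect plus bias plus sampling noise. With conjugate normals the posterior on the effect has a closed form, and the code shows how the posterior mean lands between the prior and the bias-corrected estimate.

```python
def update_effect(d, mu_effect, var_effect, mu_bias, var_bias, var_noise):
    """Posterior on the causal effect theta after seeing a biased
    observational estimate d, where d = theta + bias + noise.

    Priors (all normal, all assumed independent):
      theta ~ N(mu_effect, var_effect)
      bias  ~ N(mu_bias, var_bias)
      noise ~ N(0, var_noise)

    Because d is normal with mean mu_effect + mu_bias and variance
    var_effect + var_bias + var_noise, and Cov(theta, d) = var_effect,
    the usual normal-normal updating rule gives the posterior on theta.
    """
    total_var = var_effect + var_bias + var_noise
    gain = var_effect / total_var  # how much weight the data gets
    post_mean = mu_effect + gain * (d - mu_effect - mu_bias)
    post_var = var_effect * (1.0 - gain)
    return post_mean, post_var

# Example: prior that the effect is zero, prior that the design is
# biased upward by 0.5, and an observed estimate of 1.0.
mean, var = update_effect(d=1.0, mu_effect=0.0, var_effect=1.0,
                          mu_bias=0.5, var_bias=0.25, var_noise=0.04)
```

Two features of the sketch track the paper’s point: the posterior mean moves from the prior toward the bias-corrected estimate (here, toward 1.0 − 0.5 = 0.5), so a biased design is still informative; and as the prior uncertainty about the bias grows (`var_bias` large), the gain shrinks toward zero and you learn essentially nothing, which is exactly the no-priors limbo described above.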