Scott Gehlbach recently posted a new piece entitled “The Fallacy of Multiple Methods,” which appears in a symposium on “Training the Next Generation of Comparative Politics Scholars” in the Fall 2015 issue of the APSA Comparative Politics Newsletter (sadly, not available online). Gehlbach’s piece, more than the others in that symposium, has prompted several long discussions among friends and colleagues who work in my corner of the discipline. His target is the belief that graduate students need to produce research that employs multiple methods:
The fallacy goes like this. Comparative politics maximizes its understanding of the political world when multiple methods are employed; therefore, graduate students in comparative politics should produce work that employs multiple methods.
The argument is that even if we accept that multiple methods usefully and meaningfully contribute to the advancement of knowledge, that statement does not imply that students (or anyone else) must use them all at once. It’s a simple but important intervention in a world where students feel pressured to master all sorts of methods, a pressure that, as Barbara Geddes’s contribution to the symposium suggests, comes at the expense of substantive knowledge.
I agree with Gehlbach’s point here, but my own view is that the argument might have been pitched in far stronger terms, with what I suspect are much larger implications for current practice. Are all methods even logically compatible with one another?
To answer this requires an understanding of the epistemological foundations upon which different methods rest. Let me stack the deck a bit in favor of multiple methods being logically consistent with one another by stipulating at the outset that I’ll deal here only with positivist social science. No interpretivism, no post-modernism, not even any critical realism. So any incoherence between different methods does not stem from different epistemologies of truth values or beliefs about objective knowledge.
Within the positivist program, though, there are substantial differences in what it means to construct a causal explanation. The most thorough comparison of the assumptions of different research traditions can be found in Mahoney and Goertz, “A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research.” I find this article useful not because of how it groups different methodologies—I think that contrasting quantitative versus qualitative is a total red herring, but let’s stick with it for the purposes of this post—but because it does an excellent job of summarizing different assumptions about what the goal of explanation is and what different theories of causality are. Their core point is that qualitative and quantitative methods can be seen as having different affinities with particular conceptualizations of causality and explanation.
Now this is not to say that one cannot imagine research projects that combine quantitative and qualitative methods effectively. Lieberman’s vision of how the two can be integrated in a single study is one possibility, and I like to think of Ben Smith’s Hard Times in the Land of Plenty as a particularly good example of this kind of nested analysis at work. But this type of work rests on the assumption that, given a quantitative finding (in these cases, cross-national in nature), it is possible to select a case and learn whether the hypothesized causal mechanism is at work or not. And I have been particularly convinced by this piece by Gelman and Imbens that the Rubin causal model really does imply that it is impossible to know the causes of effects in a particular case.
This line of reasoning implies that integrating different methodologies ought to be a hard epistemological problem, if we really do care both about the Rubin causal model and about the value of case studies. Read in the strongest terms, it would imply that no qualitative case study could ever confirm or disconfirm a quantitative result. The same is true in reverse: no cross-case quantitative test could ever confirm or disconfirm the findings from particular cases. I sometimes think about this with respect to how the case studies in my dissertation relate to its cross-national “tests.”
If you’ve followed me this far, then perhaps I’ve convinced you that even if we value multiple methods, it is not clear that combining them will always result in a coherent exercise. I’ll close, then, with some conclusions from Mahoney and Goertz:
Given the different assumptions and research goals underlying the two traditions, it necessarily follows that what is good advice and good practice in statistical research might be bad advice and bad practice in qualitative research and vice versa. In this framework, it is not helpful to condemn research practices without taking into consideration basic research goals.
Misunderstandings across the two traditions are not inevitable. Insofar as scholars are conversant in the language of the other tradition and interested in exploring a peaceful and respectful dialogue, they can productively communicate with one another.
And I can’t resist this old tweet, inspired by my making a version of this argument in our comparative methods course.
@TomPepinsky – inspired by today's class #polsci #methods pic.twitter.com/XUYJ8NVo
— Germane Riposte (@NuisanceValue) March 8, 2012
Dwayne Woods October 24, 2015
Good to see recycling taking place. Ariel Ahram and Chatterjee made a similar argument against mixed methods, I believe, in the same outlet. Leaving the epistemology/ontology question aside, it is hard to master the skill set of even one methodology. Graduate students trying to do it in several have, in my experience, come up woefully short in all of them!