I’m just back from the 2012 meeting of the International Political Economy Society, held at UVA this past weekend. IPES is my favorite disciplinary conference, by far. It is full of interesting papers, most of which are actually complete; people actually attend the panels and pay attention; and the short-presentation format is a great way to generate feedback. I also see tons of friends there, old and new. IPE is one subset of my academic identity (the others are comparative politics, Southeast Asia, and Islam) and I wish that there were similar conferences for all of them. (For Islam, one does exist.)
The striking observation from this year’s conference is that there is an increasingly clear methodological divide among the participants. Outsiders might find this surprising: aren’t they all positivists? Yes, but that obscures the fundamental disagreements about how to do empirical IPE. Simplifying greatly, there are three centers of gravity, each represented in abundance.
- Generalists. Their approach has been the backbone of modern IPE for the past two decades. It is characterized by a standard methodology (time-series cross-section regressions with country-year observations), and it privileges general findings over specific ones. The objective is usually to see whether a theoretical argument finds general support across the widest possible range of observations and contexts.
- Experimentalists. Their method is a disruptive innovation for modern IPE. It too is characterized by a standard methodology: the experiment, usually on individual-level data collected through surveys or in the laboratory. It privileges the control that experimental methods provide, and values unbiased estimates of treatment effects over the other things that empirical research might produce. The objective is usually to see if hypotheses are supported at the individual level.
- Opportunists. These are a grab bag of various approaches which are united by their deliberate focus on particular events that have implications for how we understand the global economy. The methods are eclectic, ranging from surveys to limited cross-national or within-country statistical tests, even to historical or archival work. My three favorite examples from this year are Meredith Wilf’s paper on Basel III, Lawrence Broz’s paper on the Fed as a global lender of last resort, and Stefanie Walter’s paper on adjustment in Eastern Europe. There are many others; these three clearly reflect my personal interest in international finance and financial crises. Opportunists often have significant country knowledge. Opportunists rarely claim to be providing a definitive test of any grand theory, or to be characterizing any phenomenon across time and space, but they are very concerned about identification.
Here’s the thing: even though all three approaches are empirical, they are very different ways of doing IPE. And in the hallways, over dinner, and over drinks, I heard many rumblings about all three. Let me put it crudely. Do we want unbiased estimates of imaginary things (experimentalists), or estimates of real things that are biased in an unknown direction to an unknown degree (generalists), or careful studies of real things that only a few of us care about (opportunists)?
Full disclosure: I am probably best understood as an opportunist. My colleagues are probably tired of me talking about how their papers would be better if they used observational data about these neat things that happened once in maritime Southeast Asia.
As a discipline, IPE would profit from a full and open discussion of these divides. I especially think that we need to discuss the disruptive innovation of experiments. I would love to see a special issue of an IPE journal devoted to the looming methodological divides in empirical IPE research. Something like the discussion of the so-called Transatlantic Divide in the Review of International Political Economy, or Thomas Oatley’s critical discussion of David Lake’s Open Economy Politics (OEP).
I hasten to add that the critique of experiments goes deeper than simply whether we should privilege unbiasedness over all else in empirical work. Or what to do when we cannot run experiments on aid, trade, the Cold War, or democracy. It goes to the heart of what it means to find evidence that something affects individual support for or against a hypothetical policy that deals with the international economy. This matters rather more for IPE than for many topics in comparative politics or development economics, where similar debates about experiments continue. In IPE research, do findings about individuals contribute to public opinion research? Or are they contributions to our theories and models of the global political economy? If the latter, how would we know? Oatley’s critique of OEP, for instance, presents a strong challenge to experimental work in IPE because it reminds us that many system- and meso-level theories about the global economy do not have individual-level implications that could be found among survey respondents. It is a category error even to look, like trying to calculate the derivative of morality or to measure the temperature of purple. If this is true, then experimental methods are simply unable to study important phenomena within the domain of IPE. (I’ve made related arguments before in my discussions of microfoundations here and here.)
This isn’t to pile on the experimentalists. Opportunists like me tend to care a lot about internal validity, but are generally unable to comment on the representativeness of their findings. (I presented a paper on American decolonization of the Philippines, and immediately got pushback that my approach was wholly unhelpful for understanding literally every other instance of decolonization.) Generalists have the most data, but if something cannot be coded across countries and across time, then it cannot be studied using the preferred methodology.
It is worth remembering that IPES is the closest thing to the “intellectual center” of OEP that we can find. If this is true, then there is perhaps much less agreement about what constitutes the core of OEP and its methodology than many observers (both within and without) would believe. Our community should talk about it; not in the hallways, but rather in the journals, and explicitly rather than as an addendum to any particular research project.
Of course, all of what I’ve written here should be subject to debate and dispute, by participants and observers alike. I’m open to all amendments and challenges. Comments open below.