Last week I posted on how Southeast Asia does not rate as a high priority for US policymakers. Now, via the Monkey Cage’s spiffy new Washington Post blog, we also have some results on the methodologies that policymakers find most useful.
These bar charts are atrocious (categorical responses to multiple categorical questions; there must be a better way), and the lack of color doesn’t help. But if you work hard enough, you can figure out that area studies and qualitative case studies rate the highest, while quantitative and formal research ranks at the bottom.
Well, that makes a big portion of what I do—quantitative analysis of politics in Southeast Asia—just about the most unpopular thing for policymakers that you can imagine. Maybe I should just report that I do area studies.
One question that should loom large in our interpretation of these results is whether policymakers can be trusted to know what sorts of research they need, and as a consequence, whether we should care what they report is most useful. The analogy is Nate Silver versus Karl Rove. Nate Silver uses a complex methodology to create election predictions; it is not really scientific per se (it cannot be replicated), but it is certainly quantitative. Karl Rove insists that he is the “expert” whose qualitative insights into how the American electorate works, and whose ability to compare across elections through careful adjudication of the historical evidence, give him much more useful applied knowledge than any regression could. We know how that turned out.
So yes, I am convinced that policymakers report that qualitative and area studies research is most useful to them. But as any quantitative researcher knows: garbage in…garbage out.