Category: Research

  • The Ethics of Intervention

    Everyone is talking about the recent Montana field experiment on ideology in non-partisan judicial elections. I’m hearing about it even from non-political scientists, which is my measure of significance. There are three discussions happening now.

    1. Should the state seal have been used?
    2. Should researchers intervene without the consent of those targeted by the intervention?
    3. Should researchers intervene in politics?

    I’m not interested in point (1) here, or in discussing this particular experiment at all. I’m interested in points (2) and (3) as general points.

    There’s a pretty stunning amount of misinformation out there about point (2), such as beliefs that deception is never justified, that IRBs cannot approve research involving deception, and that all human subjects research requires informed consent—all of which are false.* Your IRB has procedures for determining whether a given research project requires consent or allows deception, or even both (here’s Cornell’s on consent and deception). And we can argue about whether or not those procedures were followed in this particular case by the IRBs in question, but there is no legal requirement that all human subjects research obtain informed consent or forgo deception.

    Now there might be an ethical case against such interventions, but as a friend commented to me elsewhere, it’s a consequentialist argument: are the anticipated consequences acceptable? But that brings us to point (3), the real meat of my commentary here. I want to argue that the possibility that a research design has political consequences cannot itself render such research unethical. This applies equally to experimental interventions and to observational research.

    My argument rests on an assumption that all social science research has the potential to affect political outcomes. There is nothing special about field experiments here—ask Chileans and Indonesians if non-experimental social science research affected their politics, or ask the MAD theorists and their Soviet counterparts. Moreover, the continuing handwringing by many in the political science discipline about public engagement and disciplinary relevance is precisely about using social science to affect politics. It is a peculiar position to argue that we can be advocates, but we can’t act ourselves.

    If there is an argument that is particular to field experiments, it is (as Dawn Teele argues here) that field experiments cause variation in the real world as an inherent part of their design. It’s not an accident, it’s the point.

    Source: Teele (2014, p. 119)

    But again, I cannot support the position that surveys and ethnographic work don’t have the potential to affect politics too.** The fact that interventions are not part of their design does not mean that they are apolitical, or that they do not affect the social world in which they are embedded.*** (For those of you rolling your eyes, this is not a nitpicky point; it gets to the very foundation of what, exactly, would make a direct intervention objectionable.)

    My summary point is that all social science research has the potential to affect politics. We have no general ethical obligation to avoid any research that could indirectly affect politics; we have instead a specific ethical obligation to consider the risks and benefits of any research project, experimental or not, in the field or not. I’m completely on board with any proposal to think more about disciplinary ethics and political science research, but my bet is that that would make non-experimentalists a lot more uncomfortable than they think. Do we have an ethical obligation not to cause variation in the world deliberately, even if it has political effects? I don’t see it.

    But, But, and Once Again, But…

    But I am prepared to concede one important point that comes out of this, one raised privately by two friends at universities that rhyme with Schmearacuse and Schmisconsin. That is, we have always conceptualized the notion of harm and benefit from treatment as an individual thing. In the medical literature: did you also get cancer from your diabetes medicine, things like that. What this large field experiment raises is the possibility that harms from treatment are not realized at the individual level, but rather only through the collective responses of all of those treated and non-treated together. I don’t think that we have the conceptual tools to think through the collective effects of treatments in cost-benefit terms. Perhaps the bioethicists and medical ethicists have thought about this in their discussions of vaccines, epidemics, and things like that, but I’ve not seen their insights applied here. Perhaps that would change my mind. But if so, I’d still hazard a guess that field experiments wouldn’t be the only ones affected.

    Notes

    * Strictly speaking, the position on deception is an ethical one, and many people do appear to hold the view that regardless of whether IRBs will approve deception, it is still unethical. See Kim Yi Dionne for a discussion.
    ** One personal example will suffice: my own survey-based research showed that Islamist party ideology usually does not confer an electoral advantage in Indonesia. I first reported my findings in Indonesian in 2009, at an event attended by an Islamist party leader who approached me and asked me about the implications for his party’s strategy. Now that party no longer campaigns on implementing sharia law. Is that all because of my research? No…but I am certain that my research affected how that party campaigns.
    *** Another personal example: I write these blog posts about ethnicity and Malaysian politics. I receive criticisms from some Malaysians that these posts themselves are reifying the ethnic logic of Malaysian politics, such that my discussing these findings at all is bad for a post-ethnic Malaysian politics (one that I myself support).

  • You Don’t Come into My Journal, Drop a Causal Inference Challenge, and Leave


    Martin Gilens and Benjamin Page have a major new piece on the nature of American democracy in the latest issue of Perspectives on Politics. Perspectives comes straight to my mailbox so I always browse it, but this article caught my eye because (1) it’s important and (2) its finding that economic elites and interest groups explain policy action accords with my own subjective beliefs about “how American democracy really works.” From the abstract:

    Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence. The results provide substantial support for theories of Economic-Elite Domination and for theories of Biased Pluralism, but not for theories of Majoritarian Electoral Democracy or Majoritarian Pluralism.

    Yet what caught my eye and prompted this post is a quote on pages 572–3.

    As noted, our evidence does not indicate that in U.S. policy making the average citizen always loses out. Since the preferences of ordinary citizens tend to be positively correlated with the preferences of economic elites, ordinary citizens often win the policies they want, even if they are more or less coincidental beneficiaries rather than causes of the victory. There is not necessarily any contradiction at all between our findings and past bivariate findings of a roughly two-thirds correspondence between actual policy and the wishes of the general public, or of a close correspondence between the liberal/conservative “mood” of the public and changes in policymaking. Our main point concerns causal inference: if interpreted in terms of actual causal impact, the prior findings appear to be largely or wholly spurious.

    What motivates this comment is their finding that mass public opinion predicts policy change in a bivariate regression-type analysis, but when controlling for the preferences of the richest people and of narrow interest groups, that relationship disappears.

    I believe that the relationship they report is accurate, and moreover, that their description of the underlying structure of politics that that relationship suggests is actually correct (more or less). But I do not think that this kind of statistical analysis shows it, or that the causal inference language in the final sentence of the quotation above is appropriate.

    Why? Because these correlations do not correspond to causal questions of the type “what is the effect of a change in mass public opinion on the likelihood that a bill is passed?” Think about it: just what does “actual causal impact” mean? It cannot mean conditional correlation, which is what we are seeing. It must mean something counterfactual. The authors are presumably alluding to the possibility that there is a complex, perhaps unobservable, relationship between mass public opinion and elite opinion/interest group behavior. Perhaps “in the wild” there is little independent variation between mass opinion and the other two, so that it’s unrealistic to think that we could conceptually separate the two. Throughout the text they suggest this is true. But we cannot back out from what they have shown here any conclusion about the causal impact of mass public opinion.
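    The statistical pattern at issue—a strong bivariate relationship that vanishes once a correlated control is added—is easy to reproduce in a toy simulation. Everything below is invented for illustration (the variable names, the coefficients, the data-generating process); none of it comes from Gilens and Page’s actual data. In this sketch only elite preference causally moves policy, and mass opinion merely correlates with elite preference. The regressions reproduce their pattern, but the same conditional correlations would also arise under other causal structures (e.g., mass opinion moving elites, who then move policy), which is why the pattern alone cannot settle the counterfactual question.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Toy data-generating process: elite preference is the ONLY true cause
    # of policy in this model; mass opinion is correlated with elite
    # preference but has no causal effect of its own.
    elite = rng.normal(size=n)
    mass = 0.8 * elite + 0.6 * rng.normal(size=n)
    policy = 1.0 * elite + rng.normal(size=n)

    # Bivariate regression: policy on mass opinion alone.
    X1 = np.column_stack([np.ones(n), mass])
    b_bivariate = np.linalg.lstsq(X1, policy, rcond=None)[0]

    # Multivariate regression: control for elite preference.
    X2 = np.column_stack([np.ones(n), mass, elite])
    b_multivariate = np.linalg.lstsq(X2, policy, rcond=None)[0]

    # The bivariate slope on mass opinion is large; the conditional slope
    # collapses toward zero once the correlated control enters.
    print(f"bivariate slope on mass opinion:    {b_bivariate[1]:.2f}")
    print(f"multivariate slope on mass opinion: {b_multivariate[1]:.2f}")
    ```

    The point of the sketch is not that this is how American politics works, but that the regression output is identical across very different counterfactual worlds, so “actual causal impact” cannot be read off the conditional correlation.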

    As a further note: even if we didn’t care about causal inference, we should not test competing hypotheses—be they nested or non-nested—through big multiple regression models. We have a range of better procedures for doing that.