Everyone is talking about the recent Montana field experiment on ideology in non-partisan judicial elections. I’m hearing about it even from non-political scientists, which is my measure of significance. There are three discussions happening now.
- Should the state seal have been used?
- Should researchers intervene without the consent of those targeted by the intervention?
- Should researchers intervene in politics?
I’m not interested in point (1) here, or in discussing this particular experiment at all. I’m interested in points (2) and (3) as general points.
There’s a pretty stunning amount of misinformation out there about point (2), such as beliefs that deception is never justified, that IRBs cannot approve research involving deception, and that all human subjects research requires informed consent—all of which are false.* Your IRB has procedures for determining whether a given research project requires consent or allows deception, or even both (here’s Cornell’s on consent and deception). And we can argue about whether or not procedures were followed in this particular case by the IRBs in question, but there is no legal requirement that all human subjects research obtain informed consent, or forgo deception.
Now there might be an ethical case against such interventions, but as a friend commented to me elsewhere, it’s a consequentialist argument: are the anticipated consequences acceptable? But that brings us to point (3), the real meat of my commentary here. I want to argue that the possibility that a research design has political consequences cannot itself render such research unethical. This applies equally to experimental interventions and to observational research.
My argument rests on an assumption that all social science research has the potential to affect political outcomes. There is nothing special about field experiments here—ask Chileans and Indonesians if non-experimental social science research affected their politics, or ask the MAD theorists and their Soviet counterparts. Moreover, the continuing handwringing by many in the political science discipline about public engagement and disciplinary relevance is precisely about using social science to affect politics. It is a peculiar position to argue that we can be advocates, but we can’t act ourselves.
If there is an argument that is particular to field experiments, it is (as Dawn Teele argues here) that field experiments cause variation in the real world as an inherent part of their design. It’s not an accident, it’s the point.
But again, I cannot support the position that surveys and ethnographic work don’t have the potential to affect politics too.** The fact that interventions are not part of their design does not mean that they are apolitical, or that they do not affect the social world in which they are embedded.*** (For those of you rolling your eyes, this is not a nitpicky point; it gets to the very foundation of what, exactly, about a direct intervention would make it objectionable.)
My summary point is that all social science research has the potential to affect politics. We have no general ethical obligation to avoid any research that could indirectly affect politics; we have instead a specific ethical obligation to consider the risks and benefits of any research project, experimental or not, in the field or not. I’m completely on board with any proposal to think more about disciplinary ethics and political science research, but my bet is that that would make non-experimentalists a lot more uncomfortable than they think. Do we have an ethical obligation not to cause variation in the world deliberately, even if it has political effects? I don’t see it.
But, But, and Once Again, But…
But I am prepared to concede one important point that comes out of this, one raised privately by two friends at universities that rhyme with Schmearacuse and Schmisconsin. That is, we have always conceptualized the notion of harm and benefit from treatment as an individual thing. In the medical literature: did you also get cancer from your diabetes medicine, things like that. What this large field experiment raises is the possibility that harms from treatment are not realized at the individual level, but rather only in the collective responses of all of those treated and non-treated together. I don’t think that we have the conceptual tools to think through the collective effects of treatments in cost-benefit terms. Perhaps the bioethicists and medical ethicists have thought about this in their discussions of vaccines, epidemics, and things like that, but I’ve not seen their work applied here. Perhaps that would change my mind. But if so, I’d still hazard a guess that field experiments wouldn’t be the only ones affected.
Notes
* Strictly speaking, the position on deception is an ethical one, and many people do appear to hold the view that regardless of whether IRBs will approve deception, it is still unethical. See Kim Yi Dionne for a discussion.
** One personal example will suffice: my own survey-based research showed that Islamist party ideology usually does not confer an electoral advantage in Indonesia. I first reported my findings in Indonesian in 2009, at an event attended by an Islamist party leader who approached me and asked me about the implications for his party’s strategy. Now that party no longer campaigns on implementing sharia law. Is that all because of my research? No…but I am certain that my research affected how that party campaigns.
*** Another personal example: I write these blog posts about ethnicity and Malaysian politics. I receive criticisms from some Malaysians that these posts themselves are reifying the ethnic logic of Malaysian politics, such that my discussing these findings at all is bad for a post-ethnic Malaysian politics (one that I myself support).
anirprof October 28, 2014
One other aspect to consider is, what happens when our internal disciplinary evaluation of what is ethical runs into what society at large considers ethical—which matters when conducting large scale social interventions. I think it is worth noting that in the Montana case, the reaction of basically everyone who is not a political scientist has been strongly negative: journalists, the overwhelming majority of comments on the blogs that have picked this up, everyone on both sides of the political spectrum in Montana, etc.
To add anecdotes to that, when yesterday I described the experiment to several non-political scientist friends, though still people who are quite interested in politics, to a person their reaction was outrage. A couple of them were attorneys (I know a bunch due to my spouse’s line of work), and they thought it likely that a legal case would stand up against Bonica & Rodden for the deception and failure to register (and to meet associated financial disclosure rules, etc). More importantly, though, these bleeding-heart liberal lawyers hoped MT and federal officials actually will pursue such action; that was the degree to which they thought an ethical line had been crossed.
Two things to think about: first, if most people _feel_ like a social science experiment causes some sort of harm merely for having been done, do we have to take that into account even if our own reasoning suggests that no such harm exists?
Second, is it sustainable to do experiments that generate this level of societal disapproval? Obviously staying inside the law is required, but is “legal but triggering outrage” workable? I imagine the system would self-correct soon enough, with laws being adjusted to explicitly prohibit such work and funding streams being taken away (cue Tom Coburn b-roll).
tompepinsky October 28, 2014
Tremendously good point here, person-whoever-you-are. Agree completely: sometimes we might prudently decide not to do a research project because the public optics are just so bad, even if the research itself is ethical. This might be one of those cases.
But societal approval cannot be an actual binding constraint that determines what social scientists do, right? It’s just an argument for prudence and care. There’s no actual defensible argument that we only do what passes some form of mass public approval, not that I’m aware of.
anirprof October 28, 2014
Beyond sheer instrumental prudence — don’t do a study that will get you in trouble — I do believe we should think about the potential harm of undermining public trust in elections, and of causing citizens of a state to feel that their political process was being tinkered with not by sincere advocates of political positions, but by researchers who just wanted to see what would happen. “Lab rats” is a phrase that shows up frequently in comments from people in Montana. If a large number of voters end up feeling manipulated, angry, and *less* trustful of politics than they were before the experiment, in what way are those negative sentiments and decrease in political trust not “harms”?
On the instrumental point, I think the experimentalists grossly underestimate just how negatively the broad public and elected officials will react to large scale field experiments in actual elections; the fact that such experiments are being done just hadn’t reached that level of attention until now. I will not be at all surprised to see strong prohibitions enacted on using federal govt money (NSF etc) for them, and legislative and regulatory action against them at the state level. Maybe even rules barring faculty at state universities from attempting to directly influence elections.
tompepinsky October 28, 2014
Again, I agree with both of those points. I think the backlash is going to be strong…but also that it’s not just going to be against experimentalists, which is why I am so, so wary of the rush to indict experimental research simply because it’s consequential.
mic October 28, 2014
One thing I didn’t get so far is why they needed to send the mailer to 100,000 households. This cannot be about measuring the effect more precisely. So what would they have learned with 100,000 observations instead of, say, 2,000? Correct me if I’m wrong, but the scientific value would have been the same, without any suspicion of political manipulation.
More generally, I suspect that people make a difference between learning about politics and engineering politics. Speaking of engineering, to my knowledge engineers tend to work on small scale projects in labs (i.e. they would go with 1,000 respondents instead of 100,000). Once some new technology shows promise, they branch out to big funding agencies or to industries to scale up (i.e. do a 100,000 household project). But this is then often done in newly created joint ventures outside of universities. This distinguishes fairly clearly between research and commercialization. The part that people feel uncomfortable with here seems to be that this project looks like a commercial endeavor under a research flag.
tompepinsky October 28, 2014
My guess is that the primary scientific argument for 100,000 observations must be some sort of power calculation: they expect small effects so want big power to detect them.
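As a rough sketch of that logic (using a standard two-proportion power formula and purely illustrative numbers, not the actual study’s parameters), even a one-point effect on turnout demands tens of thousands of subjects:

```python
from statistics import NormalDist

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Required sample size per arm for a two-sided two-proportion z-test."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = z.inv_cdf(power)           # power requirement
    var = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return (z_a + z_b) ** 2 * var / (p_control - p_treated) ** 2

# Illustrative: detecting a 1-point bump in turnout (50% -> 51%)
# needs roughly 39,000 subjects per arm at 80% power.
print(round(n_per_arm(0.50, 0.51)))
```

With two or three treatment arms plus a control, a design powered for effects that small lands in the neighborhood of 100,000 observations.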
Or perhaps it was a cluster-randomized experiment, with 100,000 households but a far smaller number of treatment units. Can’t tell.
Either way the concern about the sheer size of the intervention is definitely valid if we are concerned about the public optics of such an intervention.
Anonymouschimp October 28, 2014
Apart from the discussion of what is ethical (or not), it looks like Stanford’s IRB did not approve this project, and Dartmouth also did not confirm any IRB approval. There seems to be a bigger problem in how these researchers conducted their field experiment. See http://talkingpointsmemo.com/livewire/montana-mailer-stanford-dartmouth-settlement
Pingback: It is Ethical to Randomly Allocate Ethical Things? | Tom Pepinsky
Anonymous October 29, 2014
Tom, you’ve handled this discussion with nuance and sensitivity. While I haven’t been following all the coverage of this story by the minute, I can suggest that not all commentary that I have seen elsewhere has offered such nuance.
Although we tend to forget it, there is (or should be) an ethics to how we discuss ethics. That is, the principle of ‘do no harm’ should extend to the way that we discuss researchers and the ethics of their work.
In my view, more sympathy (than I have seen in some other forums) could, and should, be shown to the researchers in question. Beyond the interests of those individuals, such sympathy would arguably encourage further open debate over ethics, since researchers may then be more willing to bring up gray areas for discussion, without fear of being condemned by their peers.
tompepinsky October 29, 2014
Thanks, Anonymous. It’s nice of you to say this, and it’s bothered me how easily our colleagues concluded that the researchers in question are either deliberately and maliciously evil, or stupid.
Raul Pacheco-Vega October 29, 2014
I agree both with the Anonymous commenter and with you, Tom. I think the principle of “do no harm” is fundamental here. I’m slowly coming to realize that maybe the way in which these discussions are being framed at the moment is in fact doing more harm than good, second-guessing the researchers’ motives, technical abilities, and intents. This is the phrase that really seals it:
I also agree with those who have suggested that we shouldn’t let this poison the well and stop people from engaging in future field experiments. I’m concerned about collective perception of field experiments and political science, and funding. But I’m sure that all this debate can and will contribute to strengthening the field, overall.
tompepinsky October 29, 2014
I’m hoping that’s true! But I fear that we are too prone to faddishness to make reasoned collective decisions. So, fingers will be crossed, and I’ll be eagerly following this.