How Diverse and Representative is the APSA Annual Meeting?

“Manels” are a hot topic right now in discussions of academic conferences and workshops, and especially in political science. Conference panels and workshop programs are self-contained sites of social interaction among academics, and when the roster of a panel or a workshop contains only men, it both reflects the absence of diversity within our discipline and sends a negative signal about “who does what work,” “who is prominent,” and “whose research is valuable.” Some institutions have banned manels, and some scholars refuse to participate in manels.

Avoiding manels—and embracing both gender and other forms of diversity—is ultimately the responsibility of those who create conference programs and workshop panels. Sara Goodman and I worked on this together for the Comparative Politics division of APSA 2017, and found that it was easier on some dimensions than we expected (very few manels were submitted to us), and harder on others. In concert with the Diversity Hackathon that APSA held this past year, we obtained data on gender and other identity characteristics from the 2017 and 2018 annual meeting programs, and also sought feedback from the other 2017 program chairs about their experiences.

We have written up our analysis and reflections in a new paper entitled “Gender Representation and Strategies for Panel Diversity: Lessons from the APSA Annual Conference.” Comments, as always, are welcome.

Uncooperative Survey Experiments

Kieran Healy recently tweeted about a new paper by Allan Dafoe, Baobao Zhang, and Devin Caughey, which shows that survey respondents,

when presented with information about one attribute, update their beliefs about others too. Labeling a country “a democracy,” for example, affects subjects’ beliefs about the country’s geographic location.

This is a problem for experimental design. If we want to estimate the effect of attribute T on some survey response by randomly presenting respondents with different values of T while holding everything else about the survey prompt constant, then respondents must not change their beliefs about the prompt’s other features depending on which particular value of T they encounter.
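To see why this matters, here is a minimal simulation — a hypothetical sketch, not the Dafoe et al. setup, with made-up effect sizes. Respondents who see one value of T also shift a background belief B that the researcher meant to hold fixed, so the naive difference in means bundles the direct effect of T together with the effect of the induced belief.

```python
import random

rng = random.Random(0)

def respond(t, rng):
    # Direct effect of the randomized attribute T on the response.
    direct = 2.0 if t == 1 else 0.0
    # Violation of "information equivalence": seeing T = 1 also shifts
    # a background belief B that was supposed to be held constant.
    belief = 1.0 if (t == 1 and rng.random() < 0.6) else 0.0
    # The response depends on both T and the background belief.
    return direct + 3.0 * belief + rng.gauss(0, 1)

treated = [respond(1, rng) for _ in range(100_000)]
control = [respond(0, rng) for _ in range(100_000)]

naive = sum(treated) / len(treated) - sum(control) / len(control)
# The naive difference in means is roughly 2 + 3 * 0.6 = 3.8,
# not the direct effect of 2: the belief shift is bundled in.
print(round(naive, 2))
```

The point of the sketch is only that randomization of T does not rescue the estimate when T itself moves respondents’ other beliefs: the comparison is still between groups that differ in more than T.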

Healy’s tweet references the linguist Paul Grice’s notion of implicature, which reminds us that what we say does not fully capture what we mean. Implicature underlies the entire subfield of linguistics called pragmatics.

Implicature serves a variety of goals beyond communication: maintaining good social relations, misleading without lying, style, and verbal efficiency.

I do love a good linguistics reference applied to a political methodology problem, and it strikes me that thinking more broadly about what Grice called the “Cooperative Principle” can help us put a finger on what can seem so artificial about survey experiments.

The Cooperative Principle says: contribute what is required by the accepted purpose of the conversation. Grice divided this principle into four subparts, the Maxims of Quantity, Quality, Relation, and Manner.

  1. The maxim of quantity, where one tries to be as informative as one possibly can, and gives as much information as is needed, and no more.
  2. The maxim of quality, where one tries to be truthful, and does not give information that is false or that is not supported by evidence.
  3. The maxim of relation, where one tries to be relevant, and says things that are pertinent to the discussion.
  4. The maxim of manner, where one tries to be as clear, as brief, and as orderly as one can in what one says, and where one avoids obscurity and ambiguity.

When we take part in everyday conversation, we naturally try to observe these maxims, and we notice when others do not. For example, when someone overexplains something (straining the maxims of quantity, relation, and manner), it can make us suspicious. When someone responds to the question “how good is this cheeseburger?” with “it does look really tasty” (flouting the maxim of quantity), we are apt to conclude that the cheeseburger is probably not very good. Much humor, too, works by deliberately flouting these maxims.

Apply this, then, to survey experiments, such as priming or information or conjoint experiments. Most survey questions are designed to elicit truthful responses by appearing as natural as possible; survey experiments make this difficult. For a good illustration, I will use my own work as an example. In 2012 I published a paper with Bill Liddle and Saiful Mujani in which we used a survey experiment to tease out why Islamist parties in Indonesia are more popular than non-Islamist parties. Our survey prompt was

If there were a candidate for president from a Pancasila‐based party/Islamic party wishing to implement Islamic law, and you believed that/were unsure if that party’s economic policies would/would not develop our economy and increase the welfare of the people, would you vote for him or her?

The structure of our question allowed us to separate party ideology from beliefs about economic platform, which was our goal. But imagine you are a survey respondent facing this question. I worried then, and I worry now, that our respondents would wonder why we were spelling out both of these things at the same time. If you assume (as a respondent) that we are being cooperative, then each of these dimensions is pertinent, and that is good for us. But the question might artificially highlight a distinction that would not have been salient had we simply asked “do you support party X?” Worse yet, it might raise respondents’ doubts about whether we are being cooperative at all, suggesting that “the accepted purpose of the conversation” (which the Cooperative Principle presupposes) is not shared by researcher and respondent. Which, to be fair, it was not.
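The logic of a prompt like ours can be sketched as a randomized factorial design. The simulation below is purely illustrative — the attribute names, effect sizes, and data are all made up, not the paper’s estimates. It shows why crossing randomized attributes lets simple differences in means recover each attribute’s average effect separately.

```python
import random

rng = random.Random(1)

# Hypothetical true effects, for illustration only.
def vote_prob(party, econ):
    base = 0.4
    base += 0.10 if party == "islamic" else 0.0   # party-type effect
    base += 0.30 if econ == "good" else 0.0       # economic-cue effect
    return base

# Simulate a factorial survey experiment: each respondent sees one
# randomly assigned combination of the two attributes.
data = []
for _ in range(50_000):
    party = rng.choice(["pancasila", "islamic"])
    econ = rng.choice(["good", "bad"])
    vote = 1 if rng.random() < vote_prob(party, econ) else 0
    data.append((party, econ, vote))

def mean_vote(cond):
    rows = [v for p, e, v in data if cond(p, e)]
    return sum(rows) / len(rows)

# Random assignment makes the two attributes independent, so each
# difference in means isolates one attribute's average effect.
party_effect = (mean_vote(lambda p, e: p == "islamic")
                - mean_vote(lambda p, e: p == "pancasila"))
econ_effect = (mean_vote(lambda p, e: e == "good")
               - mean_vote(lambda p, e: e == "bad"))
print(round(party_effect, 2), round(econ_effect, 2))
```

The statistical logic works cleanly; the Gricean worry in the paragraph above is about what the *respondent* infers from being shown both attributes at once, which no amount of randomization fixes.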

I think about the Cooperative Principle every time I fill out a survey by political scientists in which I’m asked a question about my willingness to support, say, a female academic in Australia who supports the two-state solution and uses formal models.

One way to think about the Dafoe et al. paper that started this discussion is to imagine that the goal of a survey experiment should be to fulfill the Cooperative Principle, and to think about ways in which the tools we use stand in the way of that goal.