Category: Research

  • Methodology in Southeast Asian Studies (Part 2 of 2)

    In my previous post, I wrote about a peculiar asymmetry that I observed between the harder social sciences and the humanistic social sciences at a conference on methodology in Southeast Asia. The political scientists and economists, on the whole, were quite willing to engage critically with the anthropologists and cultural/ethnic/global studies folks. The same is not true in reverse: despite my best efforts to start a debate with the traditional area studies crowd, it just seemed to stall.

    Asking Questions, Being Critical, Feeling the Flow

    I entertained five possibilities for why this might have been the case: representatives from the harder social sciences are (1) numerate, (2) obviously wrong, (3) American, (4) male, or (5) jerks. I’m not dismissing any of these out of hand, but my suspicion is that it lies in something else, more fundamental to the disciplines. (Hearing from both political scientists and non-political scientists that my experience talking about Southeast Asian studies in Freiburg is not unique to either SEA or Freiburg only reinforces this suspicion.)

    I think the difference has to do with disciplinary norms about fallibility in social research.

    By fallibility, I just mean the fact that the findings can be wrong and that it’s often hard to tell. There’s nothing unique to social research in this regard, of course. But some scholars of society seem to be uniquely distressed by the possibility that we could be wrong. It’s not just that we’re bad researchers, but also somehow bad people too. (It’s this sort of idea that generates the triumphalist pronouncements that someone’s research is “reductionist” or “hegemonic” or “Orientalist,” which take on the tone of personal insults.)

    My claim is that there are two different ways in which social scientists address the fallibility of their own research.

    1. The first is the standard normal science way: trying one’s best to make all research procedures clear, objective, and replicable. (Many historians, incidentally, fall into this camp.) It involves an understanding that the point of the audience is to ensure that the presenter is not pulling a fast one. Your audience should be able to ask you questions about your research, and you should be able to answer them. The message here is “look, if you did this research yourself you’d come up with the same results too.”
    2. The second is the approach embraced in most of cultural anthropology and many of the thematic studies corners of Southeast Asian studies. In essence, it is to redefine the enterprise of social research in such a way as to make fallibility irrelevant. Rather than representing the social world as it is, or presenting a framework for organizing observations about the world, the task is to reflect upon the meanings (overt or hidden) that make the task of social research difficult. The message here is “look, you can’t do this research. And honestly, I’m not sure that I can either. So let’s forget about truth or accuracy. There’s no point trying to nail down the objective facts here. Here’s what I felt.”

    This matters, in turn, because the norm of how you think about the fallibility of your own research shapes how you engage with the potential fallibility of other people’s research. When a political scientist presents a scatterplot, s/he is already thinking about how to face questions about its meaning. In turn, when s/he sees someone else’s research, the thought is “how do I know that this is right?” Same thing when an economist hears a cultural studies presentation which claims to represent, say, what popular culture is like in Vietnam. The instinct is to immediately think something like, “well, how do I know that this is correct? How did we get a representative sample? What does popular culture mean? How does it vary?” In short, for the harder social sciences, we think that our job as an audience is to knock the presentation around to see if it holds up.

    Not so, I think, with the humanistic social sciences. If you start with the position that you yourself are incapable of faithfully recording or interpreting the social world as it “really is,” then you have no reason to believe that anyone else can either. When witnessing an economist present corruption data, I can only imagine what a critical theorist thinks. There is really no ground for interaction, because the entire enterprise of believing that one says true things about the world is a mistake. It’s at best a peculiar thing that economists do.

    One comment made by a political scientist at the end of the conference really nails this. (Sorry, no names here, but it wasn’t me.) He said, “what I’d really like is a guidebook that tells me what counts as good anthropological research in Southeast Asia. I’m not an anthropologist, so how do I know it when I see it? What standards do you use?” The fact that no one would answer him—and that no one could even point him to a place where anthropologists have been debating the answer—speaks volumes.

    I don’t think that it was always this way. I wasn’t around in the 1960s and 1970s, but I think that back then debates across the disciplines in Southeast Asian studies really were more common, and likely more productive, with contributions from both sides. We’ve lost something. If we take interdisciplinarity seriously, which is what we’re supposed to do, we ought to restart the conversation.

    So here’s a plea—a cry into the wilderness perhaps, but I make it nonetheless. Can we find a forum for real dialogue across disciplines on methodology in Southeast Asian studies? An AAS panel, a special edition of JSEAS, something like that? I see the wonderful Freiburg conference last week as actually crystallizing for me just how much further we need to go. This will require some representatives of the humanistic social sciences (you know who you are) who are willing to engage directly with the harder social sciences on their own terms. I’ll volunteer to be the representative from mainstream political science. Who’s with me?

  • Methodology in Southeast Asian Politics (Part 1 of 2)

    The recent conference on Methodology in Southeast Asian Studies at Uni Freiburg was a lot of fun and a good learning experience. It was also revealing about some basic differences among the disciplines in which Southeast Asianists work. This calls for a big, two-part post on methods in the disciplines. Few of these are my own ideas alone—I learned a lot from the other participants, and the conversations helped me to think about these issues—but I won’t use other people’s names in order not to implicate them for what I’m about to say.

    The basic contrast that emerged was between the hard social sciences (political science and economics) and the more humanistic ones (anthropology, global/cultural studies, area studies). History was not much represented, but to the extent to which it was, it acted like a humanistic social science. The specific point of contrast was in cross-disciplinary engagement: The political scientists especially seemed far more comfortable engaging outside of their discipline than the more humanistic social scientists. As an indicator of the divide, when the anthropologists, for example, gave presentations, the political scientists would ask questions. When the political scientists and economists gave presentations, the only questions they received were from other political scientists and economists.

    This matters a great deal to me because I see the point of interdisciplinary conferences like this to be to encourage debate and discussion of the very foundations of the way that we study Southeast Asia. I claim that the goal of Southeast Asian political studies is to make true statements about how politics actually works, why it works that way, and what the consequences are. I gave a presentation in which I argued that the best way to figure these things out is to compare things, that qualitative and quantitative methodologies are both equally suited to doing this, and that Southeast Asia is not that special. It somewhat irks me that I didn’t get a comment on this argument from anyone in cultural studies, even though I know that people in that room disagreed pretty fundamentally with these perspectives and even though I deliberately tried to provoke a discussion by exaggerating my position for effect. It also irks me that the representatives of experimental methods, comparative historical analysis, quantitative methods, and formal theory in Southeast Asian political studies similarly received no comments from outside our discipline (except from economists). As much fun as we had talking to our like-minded friends and colleagues, we didn’t write our presentations for them.

    It’s actually the asymmetry that is most striking. I was taken aback by how openly the political scientists in the room (not just me) were willing to engage at a pretty fundamental level with the presentations by scholars working in history, anthropology, cultural studies, and so forth. I mean, to ask critical but not hostile questions, or to seek clarification on unclear points.

    So what’s going on here? Here are five potential explanations for why the PS and econ representatives seemed so much more engaged with their counterparts from other disciplines.

    1. Numeracy. We often talk about literacy as this basic skill that students must have, but numeracy is just as important. It could be that the more humanistic social scientists simply could not understand our presentations, which (despite our best attempts to simplify) assumed an understanding of things like functions, correlations, treatment effects, Boolean algebra, equilibria, and so forth.

    2. Disdain. It might be that the problem was that our audience actually understood our presentations perfectly, but that they considered our arguments so obviously wrong as to not even warrant a response.

    3. National academic culture. For better or for worse, there was a pretty clear divide between the economists and political scientists (mostly U.S. trained and employed in U.S. universities) and the others. Maybe those employed or trained in the U.S. just have a more aggressive and confrontational style.

    4. Gender. Also for better or for worse, there was a pretty clear gender divide between the two groups, although the overlap between gender and discipline was not perfect. Maybe men just have a more aggressive and confrontational style.

    5. Personality. It’s possible that there is some unobserved factor—call it personality—that explains the differences across groups. Maybe aggressive and confrontational people just sort into the harder social sciences, which explains why our group was more vocal, while more introspective or reticent people sort into the humanistic social sciences.

    These factors may explain some of what I saw in Freiburg. However, I don’t think that any of these things really capture the origins of this divide between the hard and the humanistic social sciences in Southeast Asian studies. Stay tuned for more later.