As a follow-up to the Data Access and Research Transparency (DA-RT) initiative, which has generated quite a bit of controversy among political scientists, there is now a website for people to deliberate the issue. The site has been up for a while, but it is important for those who want to have a voice in the deliberations to participate. My DA-RT petition signing statement urged such deliberations, so please go have a look. Today, if possible, if your Sunday plans allow it.
In reading the forum postings, I can’t help but be discouraged by a lot of what I see. My position on the DA-RT policy has evolved toward opposing any kind of discipline-wide policy on how to present evidence. No policy that could be adopted will replace discipline-wide norms of honesty, self-criticism, and openness to debate. But I don’t see how a journal transparency policy will cause people to become skeptical of certain types of evidence; I already live in a world in which those people exist, and there are patterns across journals. I also already live in a world where things like the references section ought to suffice for most questions about evidence[*], and in which referees already continually demand clarity about how inferences are made, how evidence was gathered, and so forth. Editors already have great latitude to defer to prominent referees or to enforce their own standards, equitably or otherwise.
My biggest worry is that the evolution of the debate in qual-versus-quant and positivist-versus-nonpositivist directions is having the effect of re-erecting the very barriers that I and many others have long hoped to demolish. The discursive construction of methodological divides, or something like that. It is something to be regretted that there are grad students who think that their most important choice is of a methodology rather than of a question, or researchers who identify professionally by methodology and epistemology rather than by substantive problem.
NOTE
[*] This is a big area of misunderstanding. Many researchers who work with historical or archival data appear to believe that the Journal Editors’ Transparency Statement (JETS) means that they will be required to submit, say, full scans of any archival documents that they use. The basis appears to be the first item in the JETS statement, that journals
Require authors to ensure that cited data are available at the time of publication through a trusted digital repository. Journals may specify which trusted digital repository shall be used (for example if they have their own dataverse). If cited data are restricted (e.g., classified, require confidentiality protections, were obtained under a non-disclosure agreement, or have inherent logistical constraints), authors must notify the editor at the time of submission…
Is it actually true that JETS means that standard references would no longer suffice? I don’t know.
wws501news April 24, 2016
I am not a core member of the DA-RT initiative, but I attended most of the substantive meetings on the qualitative side, where consultations were held with researchers and journal editors about everything from human subject issues to the level of consensus between the qualitative and quantitative communities. In recent years, in developing digitally enabled “active citation” as a standard format, I think I have written more about how this technology is likely to work than anyone in the field.
Tom, I would add the following.
You are correct that there is a widespread misunderstanding in the field that qualitative transparency must or will mean archiving all your documents on some website. This is incorrect. This misunderstanding arises for three reasons: (a) the DA-RT guidance is deliberately vague, saying little about how data transparency is to take place, because the cardinal principle of DA-RT is that research communities and journals can set their own standards, so the DA-RT people did not presume to decide this issue; (b) political scientists, even qualitative ones, naturally assume qualitative transparency will work like quantitative transparency, where you do archive data, but in fact in the draft qualitative guidance from DA-RT, the only example mentioned was digitalized footnotes (“active citation”), not archiving; (c) as you point out, people forget that human subject protection, intellectual property law, logistics, and first-use concerns take precedence over transparency, so in most cases it will not be practical to archive anything anyway.
My view, as expressed in numerous recent published writings, is that the only viable default approach for qualitative work is digitalized “active citation”: an appendix containing entries hyperlinked to those footnotes in an article that pertain to empirically contestable points. Each of those entries contains (human subject protections permitting, so perhaps sanitized, anonymized, redacted, summarized, or even suppressed) a 50-100 word quotation from the source, an interpretive annotation that explains why it supports the descriptive, interpretive, or causal claim in the main text, a full citation, and optionally (at the author’s discretion, and only if legal, practical, and available) a scan of or link to the whole document. The format would be mandatory for qualitative articles, but the extent to which it is employed (how many entries, how much quotation, what interpretation), what needs to be suppressed for human subject reasons, etc. would remain in the hands of the author, because I don’t think it’s a good idea to empower editorial discretion.
In essence, this is simply to say that journal articles in qualitative political science will now read more like articles in legal academia (law reviews), traditional historical or humanities journals, exceptional journals like STUDIES IN AMERICAN POLITICAL DEVELOPMENT or INTERNATIONAL SECURITY today, or like most political science journals three decades ago, with discursive footnotes, textual references, longer word limits, and interpretive analysis. Over recent decades, with shorter word limits and “scientific” citation, journals have moved in a direction that is biased against rich qualitative work, and this would redress that. The biggest advantage would be restoring this added length (it is the equivalent of a longer word limit, in layers) and richness, but it might also add to rigor and real-world relevance, I think. This ability to incorporate case study detail is one reason law, history, area studies, and policy analysis, four fields in which I publish, are now more policy-relevant than mainline political science. This is a very conservative “back to the future” proposal. We know it is feasible because we have already done it, and still do.
The disadvantages are slight. It seems like more work, but actually most qualitative scholars (especially interpretivists) want more words and more local knowledge in their work. Interpretivists should especially love it, because it validates their epistemology of social science: by presenting data as narrative “causal process observations” (Brady/Collier, Goertz/Mahoney), not “dataset observations”; by recognizing the essential role of interpretation of data in the annotations; and by permitting the subjects, objects, or observers of politics we cite to speak to the reader in their own words. And it has the practical advantages of (a) leaving journal formats intact (all the changes are in the hyperlinked appendix published with the article); (b) keeping the logistical burden to a level that we in political science and fellow scholars in other disciplines have already experienced and know we can handle; and (c) reducing the human subject and intellectual property problems to a manageable level.
If we simply recognized that in 99% of cases this is what is going to be feasible, and made everything else voluntary, I think we could focus on practicalities and, as you rightly say, Tom, avoid these depressing debates across methods and epistemologies. I do think qualitative people do different things, and I think this approach takes account of that different epistemology and the different practical constraints.
Thoughts?
Andy Moravcsik
Princeton University
tompepinsky April 24, 2016
Thanks much for the extensive thoughts, Andy. There is real value here, and I am really curious to read more from scholars whose work would be most affected about how they have received the active citation proposal.
One comment that you made that I think is worth further exploration is your statement that “the format would be mandatory for qualitative articles.”
As you might expect given my post, I question the practical usefulness of a distinction between a set of things called “qualitative articles” and a set of things called “not qualitative articles.” There are many articles that are chock full of statistical analyses that nevertheless invoke descriptive, interpretive, and causal claims independently of the inferences drawn from statistical analysis. I can think of several that I have written. I would object to a standard that disproportionately falls upon the authors of articles that eschew statistics, who alone have to go to such extra lengths in providing textual evidence for all descriptive, interpretive, and causal claims because their articles are qualitative.
Another way of putting this: if we take seriously that no statistical analysis has an objective meaning that can be communicated independently of the context in which the data are generated, then the distinction between qualitative and quantitative is harder to sustain. Researchers who share my view that substantive knowledge of context, history, and the like makes statistical analyses better ought to agree. This would recommend a rather more encompassing view of whose work would be subject to the active citation mandate. Has this ever come up in your conversations?
A completely separate point is that I am growing more opposed to mandatory standards that are applied across journals. (I wrote in the above post, “my position on the DA-RT policy has evolved toward opposing any kind of discipline-wide policy on how to present evidence.”) I am in favor of a more measured approach in which standards become accepted because procedural innovations in how we do research prove to be consequential, and the community of researchers who would be subject to those standards come to favor them for themselves. The move to require replication materials for statistical analyses, if I am not mistaken, did not begin as part of an APSA mandate. Instead, it emerged as individual journals started asking for replication materials, and as we as a community of researchers observed the consequences, we came to appreciate their importance. For a proposal such as active citation, which (at least at present) will have substantial logistical costs even if it is, as you note, a conservative proposal in intellectual terms, my view is that the case is best made by demonstrating its value through practice. On that front, I am looking forward to seeing how this unfolds.
Thanks again for reading!
wws501news April 27, 2016
Thanks for your reply, Tom. This is a very useful discussion and gives us a chance to clear out a lot of non-issues and focus on the ones that matter.
(1) NO ONE has proposed a common set of standards for all journals, so there is no need to oppose one. What has been proposed is a set of pretty vague norms that journals can sign on to voluntarily. They are self-evidently and deliberately so broad that there will be considerable difference in how they are applied; indeed, the first thing that happened was that separate qual and quant norms were proposed, and even these are vague. This is as it should be. We have much variation across journals today in methods, article length, levels and types of documentation, footnotes, transparency, data requirements, etc. We live with it, and that’s fine. More radical alternatives are fine too: Jeff Isaac takes the explicit position in his famous critique of DA-RT that PERSPECTIVES is not about “empirical analysis” at all but about introducing new ideas, and that transparency in the sense DA-RT promotes it is therefore not relevant. If that’s the considered view of a research community, fine.
(2) You are right to question a distinction between qualitative and quantitative ARTICLES. No such distinction is meant or implied by the DA-RT people. That’s a red herring.
The proper distinction is between individual qualitative and quantitative empirical research claims (descriptive, interpretive, or causal). Active citation, or any other transparency norm, would apply to qualitative and quantitative scholars alike, i.e., those who primarily do one or the other, whenever they advance a qualitative empirical claim.
Of course there are “qualitative” aspects of statistical analysis, like much coding. That’s not what is meant by qualitative analysis here. DA-RT thinks of qualitative analysis as Brady/Collier, Mahoney/Goertz, and most people do: as characterized by relatively few cases, textual evidence, and the embedding of evidence as “process observations” in a narrative structure. I admit to being somewhat amused by the widespread unwillingness of quantitative scholars to make coding or initial qualitative data transparent, but it’s not my problem. The cardinal principle of DA-RT is “research communities decide.” I do not want some person who has never done qualitative work telling me, for example, that I should archive every single document in every box I examined in the British Records Office to protect against cherry-picking, or second-guessing my human subject protection procedures. We qualitative scholars will do that ourselves, thank you. The price you pay is that you have to let them make their mistakes.
But you raise an interesting point: one of the major effects of this is that quantitative/formal scholars who do some qualitative work may feel some obligation to do it more transparently and, therefore, more thoroughly, and to be somewhat more concerned about how qualitative experts will judge it, just as, when I do some stats or formal work, I feel obliged by the transparency norms of that work. This seems to me altogether a good thing.
Indeed, contra your fears, this will impact many quantitative scholars, and on net (contra Isaac) I suspect it may well “inconvenience” them (if you want to call showing your work an inconvenience) much more than qualitative scholars. After all, qualitative scholars have presumably already done the proper background work; they just have to show it. The same cannot be said of all those who do a statistical or formal analysis and then add a case study. They are, for the first time, going to have to think a bit more about making that case study more transparent, and therefore richer, more rigorous, more relevant, and more open to critique by qualitative scholars (all the good things that come from transparency). You might think that’s exceptional, but it’s not. We often forget that qualitative research is the most widely employed method in political science. Over 90% (you read that right, NINETY PERCENT) of scholars in IR (the sub-discipline for which I have statistics at hand) employ it, whether they primarily do qualitative or quantitative work.
(3) Of course I share your (related) view that we don’t want to impose a “standard that disproportionately falls upon the authors of articles that eschew statistics, who alone have to go to such extra lengths in providing textual evidence for all descriptive, interpretive, and causal claims because their articles are qualitative.” But I think this is a misleading characterization of this proposal (though a widespread one), for four reasons.
(a) As we have just seen, the marginal burden of adjustment will fall largely on quantitative scholars who do a little qualitative work.
(b) You seem to imply that we should be tougher on the quantitative people and that this is somehow unfair, but whatever your attitude toward statistical research, there is no question that those who do it are subject to more extensive transparency norms than we are, and they benefit by being able to have much more cost-effective empirical debates, to reuse data, and so on.
(c) How much of an imposition is this? You say it should be bottom-up. Active citation simply says we should oblige (but not really require) scholars to provide the level of empirical and interpretive richness that is commonplace or universal in history, law, policy analysis, and classics, and that is already present, albeit in a slightly different form, in the best political science books and in political science journals like SAPD and IS. It was even more commonplace in political science until 25 years ago. Think about it as encouraging people to adopt existing best practices. My sense is that in your average qualitative article, we are talking about 20-40 citations. In your average mini-case study, perhaps 10-15. Maybe an ambitious scholar would have 50-70. It’s not really a large number in any case.
(d) This is as much an opportunity as an imposition. Local knowledge is what we are good at, but no one knows it if we cannot show it. That’s why I have never met a qualitative researcher who did not want longer word limits, more chance to show evidence, and more debate over THEIR local knowledge, and the more interpretivist they are, the more they want it. Are you denying this? And why do they want these things? Because, over the past 25 years, political science as a discipline has become less and less favorable to rich qualitative debate in ways that transparency can help redress. Journal articles have gotten shorter, footnotes have become so-called “scientific citations,” and the traditional virtues of language, area studies expertise, historical knowledge, policy expertise, interpretive subtlety, and so on are being rendered invisible, and thus harder and harder to foster, recognize, engage, and reward in the core of the political science profession. I am not saying that greater transparency and the de facto elimination of word limits and scientific citations will necessarily lead to more value being placed on linguistic, historical, policy, area studies, and other forms of local knowledge, and to richer debates among qualitative scholars about relevant problems in the real world, as we see in fields like law, history, and policy that are much more transparent than we are. But transparency is, I think, a necessary precondition for such changes. If we are going to get where Jeff Isaac wants to go, to a more relevant political science, we need to start by making the real world of politics as it is lived by observers, subjects, and objects, and the interpretations we make of them, more immediately transparent to readers.
So where does that leave us in practical terms?
(1) Yes, these various committees should discuss what a qualitative claim based on evidence (a “contestable knowledge-based claim” in DA-RT speak) is. But in principle I think it’s pretty straightforward: it does not and should not include background literature review, theoretical citations, citations for background information, or anything you as the author do not deem to be contestable. We in political science do not and would not go down the road of law reviews in citing everything completely just for the sake of citing it. It would be limited, therefore, to information that is potentially controversial and is integral to the descriptive, interpretive, or causal claim you are making: contestable in that sense.
(2) Perhaps there remains some residual grey area of background or core case study information, the status of which is unclear. My own view is that we (that is, individual research communities) should write some general guidance and see what authors come up with once they have norms and transparency. We can reassess after 5-10 years.
Does that make sense? The more we can set aside big, scary, but essentially uncontested or irrelevant issues and focus on these types of details, the better we can push forward the interests of qualitative and interpretive scholars in the field.
I leave you with one last thought. You want bottom-up change. Transparency is coming. It’s everywhere around us: on the web, in the media, among other disciplines, think-tanks, governments and bloggers. We’d be shocked (rightly so) nowadays if such materials were not hyperlinked to sources. My kids don’t even understand material presented any other way. The truth is that political scientists are way behind the curve here: less transparent than those we study. Like James Scott and others, I think our job is to generate the kind of information that reveals what really goes on in politics, and that, in the end, this task strengthens the weak and the masses against the few and the rich. But that information needs to be rich and it needs to be credible, or we lose any comparative advantage we have.
Andy Moravcsik
tompepinsky April 27, 2016
Thanks for the follow-up. I agree that this is useful, and in the spirit of putting to bed the non-issues, let me just respond quickly to some of what you’ve written.
Whatever the merits of this debate, I think that it might be helpful, for other readers, to call attention to your point here. If no one is proposing a common set of standards, and those standards are actually just “pretty vague norms,” then DA-RT isn’t such a big deal to me. I feel neither the need to defend it nor the need to oppose it.
For what it’s worth, my very first reaction to DA-RT back in the fall was “hmm, pretty anodyne stuff, hard to disagree with the principles, won’t affect me in any way.” I have been struck by how many OTHER researchers feel that they WOULD be affected by it. I signed the DA-RT petition because I wanted more debate about costs and benefits (here’s an active citation for you! https://tompepinsky.com/2015/11/05/the-da-rt-petition/)
There is a large community of people who are strongly opposed to DA-RT because they think it really, really matters. This is just a fact. Maybe they should read your explication of what the stakes actually are; it might tamp down some of their criticisms.
It sounds like you and I agree here. But my comment was actually just a response to your previous post: “The format would be mandatory for qualitative articles.” If the proper domain is the claim, not the article, then we agree.
Agree! The goal of better case studies in mixed-methods scholarship is certainly a good one.
Again, if the domain where this practice would operate is the claim, not the article (or the scholar…I am not a “quantitative person” or a “qualitative person” or a “mixed method person”), then my point wouldn’t apply anyway. A question here, though: do you suspect that the resistance to DA-RT from scholars who do self-identify as part of a methodological tradition might have something to do with a fear that they are not doing research that would withstand the kinds of transparency that a practice like active citation would encourage? In that case, the marginal burden of adjustment would indeed fall on qualitative work.
Statements like DA-RT are most useful if they actually matter for how research is conducted. If DA-RT is vague enough to allow for such varied interpretation, then I can’t get very excited about it either way.
Agree on this: I like the emergent norm that journals and readers expect replication materials for statistical analyses. I don’t want to be tougher on “quantitative people” but I also don’t think that replication materials are hard to compile if you do all your work expecting to replicate it later.
I do get confused by some of the regulatory terminology you use here: oblige versus require, mandatory or not. I do think that your point that the burden of active citation is supposed to be fairly light is something that most people don’t recognize. It would be worth sharing further.
I don’t have any immediate reaction to this other than to say that it’s well said, and I do agree that word limit nonsense discourages deep engagement with historical or ethnographic materials (more active citation! https://tompepinsky.com/2013/07/23/journal-reform-wish-list/).
I like the idea of communities developing norms for how to confront citation issues on their own. I’ve had productive conversations as part of the CPS editorial collective, for example. That seems to me the proper domain for such discussions. I am skeptical that some broader community, like an APSA organized section, ought to be doing it.
What I want to return to is this: I don’t identify as a qualitative scholar, yet many researchers who do so identify are the ones who most stridently resist even a voluntary set of pretty vague norms like DA-RT. Why would that be? I’m not the one to answer on their behalf.
Ha, my kids are still a bit too young, but I bet they will feel the same way. I regularly google passages from books that are on my shelf because I am too lazy to stand up to get them. I am all for transparency for the same reasons you are. Still, my invocation of Scott would go in a different direction: to be skeptical of the ability of large organizations to generate standards that we are obliged to follow.