I am really pleased to see that our series on Women in Methodology is already generating some much-needed attention to and discussion of the issues that women face in our subfield! Our two entries in the series have already been viewed more than 800 times, and Meg Shannon’s very interesting article on barriers that women face in methodology has been shared more than 100 times on Facebook and 40 times on Twitter.
Indeed, our series has even gotten some attention over at political science’s anonymous internet hive of villainy, and there are some serious issues raised there that I think deserve a hearing. Note: this post is not an entry in our series, and Meg didn’t edit it; I alone am responsible for what is about to unfold.
I begin by copying the note in full, with some edits to remove strong language and/or personal attacks. Our anonymous correspondent writes:
Looks like the Methodologists have decided to address the gender gap in their subfield:
Have to say, it’s a nice step. But I’m betting they don’t touch on one of the biggest problems: political methodology is the least friendly, lowest-reward field in political science.
Yes, all the issues related to women and math, women in the classroom, and gender-relations are important. And deserve a great deal of attention. Let’s hope the series of articles helps.
But… what other area in political science gets into such snotty, picayune, ad-hominem kind of fighting than Methodology? The yearly meeting is like NASCAR: people are watching to see what fights break out. People spend years in back-and-forth papers that don’t get published and are years behind the field of statistics… Even when someone’s methods are pointed out as laughable, it doesn’t change practice…
…The attempt to play gotcha with papers is nowhere more pronounced than at methods panels and meetings. Even the discussant practice of RESTATING THE ENTIRE PRESENTATION YOU JUST SAW that predominates formal theory panels is really just a chance for the discussant to say “see, I figured out what you did and can say it better and propose a new model that people should see is far superior to yours”.
Plus, what do you get by BEING a methodologist? The number of jobs is REALLY low, the people who are in it like to try publishing in statistics journals (or, as a second-place prize, statistical-software journals), and you’re often seen by the rest of the department as the stats consulting office, rather than someone doing political science. Even if you want to argue that Poli Sci gets something useful out of having our own branch of methods people (and I don’t — it’s all applied math/stats, so if that’s your thing, go do that) you cannot deny that most of these people have HUGE envy issues with econ/stats/physics that makes them alternately insufferable or irrelevant.
That some men want to spend their days trying to prove they are better enough at math than you that their opinion on your models should be gospel, and yet good enough at politics that they should be consulted on topics because a full on statistician won’t “get it”, is surprising. That even fewer women want to, is not.
As I read it, there are two key arguments in this note.
- Political methodology does not meaningfully contribute to political science OR quantitative statistical analysis. Our research is mostly something copied out of a back issue of Econometrica or JASA with a political science application appended. Ergo, methodology does not command respect in the political science community or provide ample opportunities on the job market. As a result, careers in methodology are less attractive to women, who must be especially attuned to career viability because of the other barriers that they face.
- Methodology is a field more concerned with debate, competition, or one-upmanship than scholarly contribution. These priorities emerge in the methods meetings, where discussions are either unhelpful or outright rude. These communication styles are culturally identified as masculine and are threatening to women. Consequently, women are disproportionately dissuaded from participating in methodology.
Let’s talk about whether “political methodology is the least friendly, lowest-reward field in political science” and what we can do to improve.
Political methodology: creating new quantitative tools
I’ll start with the first point: methodologists do not do original or high-quality work in quantitative/statistical science. I must begin by admitting that not every article in political methodology is paradigm-shifting or revolutionary: some articles publicize and demonstrate the use of techniques created in other fields that political scientists have not yet adopted. In my opinion, this is important work. I believe that part of our responsibility as methodologists is to help our colleagues and students design and execute the best research possible, and that requires us to promote and illustrate developments in epistemology and quantitative analysis across all fields. The average statistician has no such responsibility, or even a familiarity with the basic problems and tensions that we face in our day-to-day research.
However, I believe that methodologists are also useful and necessary to every social scientific discipline precisely because it is the priorities of substantive research in those disciplines that drive the creation of new methods, methods particular to and inspired by the substantive research agenda. For example, it was Philip Wright, an economist, and his son Sewall, a geneticist, who drove the development of instrumental variables techniques (Stock and Trebbi 2003). These techniques arose as a solution to a specific problem in economics: how do we separately identify demand and supply curves when the observable price-quantity data are simultaneously determined by both? To be sure, statisticians contributed to what we now know about instrumental variables models (e.g., Wald 1940). But it is fair to say that IV estimators were inspired by substantive economic research and that their development was initiated by econometricians.
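To make the identification intuition concrete, here is a minimal sketch (my own toy simulation, not drawn from any of the cited papers) of the Wald-style IV estimator: an unobserved confounder biases the naive OLS slope, while the instrument recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Toy data: u is an unobserved confounder; z is a valid instrument
# (correlated with x, independent of u and of the outcome error).
z = rng.normal(size=n)
u = rng.normal(size=n)
x = z + u + rng.normal(size=n)

beta = 2.0                          # true causal effect of x on y
y = beta * x + 2.0 * u + rng.normal(size=n)

# Naive OLS slope is biased because u drives both x and y.
c = np.cov(x, y)
ols = c[0, 1] / c[0, 0]

# IV (Wald) estimator: ratio of reduced-form to first-stage covariances.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

# OLS overstates beta (the confounding pushes it upward); IV is consistent.
print(f"OLS: {ols:.2f}  IV: {iv:.2f}  truth: {beta}")
```

The point of the sketch is only the contrast between the two estimators; in the simultaneous supply-and-demand setting the same logic applies with a demand or supply shifter playing the role of `z`.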
The same story can be told about a great many quantitative models. For example, the Rasch (1960) model was developed in psychometrics to solve an educational testing problem; this model underlies the item response theory models now widely applied across many fields. In our own field of political science, quantal response models were developed by McKelvey and Palfrey (1995, 1998) in response to a demand for a closer relationship between formal theoretical models and the empirical models that were used to test them.
Even if methodologists do not create a brand new method with every paper, they often combine a clear presentation of proper statistical practice with software that makes it practical to implement a method in day-to-day research. King, Tomz, and Wittenberg’s Clarify software is one apt example: their article lays out how parametric bootstrapping can be a useful and powerful technique for solving practical problems in political research, and it provides researchers with a tool that lets them use the technique in their own work (King, Tomz, and Wittenberg 2000; Tomz, Wittenberg, and King 2003). The same idea applies to Brambor, Clark, and Golder’s 2006 article on testing interaction hypotheses; there is no new model, but the article lays out a clear and compelling standard for testing interaction hypotheses (with software!) that has powerfully influenced our field. In both cases, these papers were responding to a need that arose organically from the substantive literature (the requirement to test hypotheses in non-linear and/or interacted models).
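As a rough sketch of both ideas together (a toy linear model I made up for illustration, not the original Clarify code), one can fit an interaction model, simulate coefficient vectors from their estimated sampling distribution, and report a confidence interval for the marginal effect of x at a chosen value of the moderator, in the spirit of Brambor, Clark, and Golder:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy data with a true interaction: dE[y]/dx = 0.5 + 1.0 * z
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 0.5 * x - 0.3 * z + 1.0 * x * z + rng.normal(size=n)

# OLS fit of y on (1, x, z, x*z)
X = np.column_stack([np.ones(n), x, z, x * z])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
V = sigma2 * np.linalg.inv(X.T @ X)      # estimated coefficient covariance

# Simulate coefficient vectors from their estimated sampling distribution.
sims = rng.multivariate_normal(beta_hat, V, size=5000)

# Marginal effect of x at z = 1 is beta_x + beta_xz * 1.
me_sims = sims[:, 1] + sims[:, 3] * 1.0
lo, hi = np.percentile(me_sims, [2.5, 97.5])
print(f"marginal effect at z=1: {me_sims.mean():.2f} [{lo:.2f}, {hi:.2f}]")
```

This kind of simulation of quantities of interest, with uncertainty attached, is essentially what the software accompanying both papers automates for applied researchers.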
Finally, there is good evidence (quantitative, natch) that political scientists find what we are doing very useful indeed. The premier journal of political methodology, Political Analysis, had the fifth-highest ISI impact factor among political science journals in 2012. Additionally, as of today, the most-cited article in the American Political Science Review is Beck and Katz’s 1995 article, “What to Do (and Not to Do) with Time-Series Cross-Section Data.” It even beat out Riker and Ordeshook’s much older “A Theory of the Calculus of Voting”! That’s pretty good, I think. In short, methodological work gets cited by political scientists.
Political methodology: contributing to substantive problems in political science
I just argued that substantive problems in political science influence the methodologies that we create, giving multiple examples along the way. But what about the converse: does methodology change (and hopefully improve) the discussion of substantive political problems, or is it all for naught?
This is a big question, and one that I cannot hope to answer comprehensively in this article. But to give some insight into this question, I decided to consult the Summer 2013 issue of Political Analysis that I happened to have on my desk as I wrote this piece. What is in there?
- Two articles on the use of text as data for the analysis of political problems, one that reviews the overall tensions and opportunities for political science and another that zeroes in specifically on determining how well text analysis techniques can measure latent political traits (like ideology) compared to human coding of those texts.
- An article that develops a new model for analyzing individual-level, survey panel data of preferences, demonstrating in real data that previous models distort our understanding of the link between government policy and political support.
- An article that develops a new measure of democracy, finding that prior measures consistently misclassify regimes.
- Three articles that debate the gene-ideology link purportedly established by twin studies in the nascent field of genopolitics; the debate centers on whether the twin studies license the inferences that were actually drawn in those articles.
Yes, some of these papers can get pretty technical and narrow. Quoting from Shultziner’s article:
It should be recalled that members of MZ [monozygotic] pairs also have different traits under the same environmental effects in all the twin studies. Their trait resemblance is only relative to members of DZ [dizygotic] pairs on average; it is not absolute. Different traits often do result from the same shared genotype of any given pair of MZ twins. Only on statistical average do the traits of members of MZ pairs tend to be more alike relative to members of DZ pairs. Furthermore, unlike the studies in developmental biology that were noted above, twin studies do not examine the adaptive political repertoires, the possible range of political reactions, or the degree of behavioral plasticity of the individuals studied. (Shultziner 2013, p. 353)
Uh, right. But there can be no doubt that these technical questions bear on a very politically important idea: “The twin studies method thus can neither prove nor refute the argument for a genetic basis of political traits such as liberal and conservative preferences or voting turnout.” Insofar as science is about sweating the details, answering many small questions along the way in order to answer a much bigger overall question that drives our curiosity, I argue that these articles represent political science in the fullest senses of the words political and science.
Political methodology: its norms and folkways
The issue of political methodology’s communicative norms is the one that I feel least qualified to speak on. After all, I am a relative newcomer to the scene. That said, I have been witness to some discord at the methods meetings. I often don’t find that kind of debate especially productive, but that is not why I feel compelled to address it here.
The reason I address it now is that, as Meg pointed out, there are empirical reasons to believe that “Women may be less motivated than men to study political methodology because of norms in the classroom and graduate seminars.” Furthermore, I believe that it is unreasonable to expect these women to simply toughen up and learn how to take (and throw) a rhetorical punch.
The reason I believe that the onus is on us (the methods community) to adapt to the problem, rather than on women methodologists to adapt to us, is that methodology does not operate in a cultural vacuum. In order to succeed in science, women must overcome a constant stream of messages that they are unsuitable for science–messages delivered by, among others, the president of Harvard University. These messages vary by culture, producing consistent cross-national differences in the gender gap in (e.g.) mathematical skills.
If a woman has made it to the Annual Meeting of the Society for Political Methodology, I suspect that she has already taken more than her share of rhetorical punches. We cannot change the whole culture–at least, not immediately and not by ourselves. But although the POLMETH meeting is a pretty small chunk of that culture, it is a chunk we do control.
None of this is to say that standards should be lowered or differential accommodations should be made; far from it. I simply suggest that we let go of the myth that insight is correlated with sarcasm, snark, or cruelty. Criticism need not be personal to be effective. I’m as prone to some of these problems as the next methodologist, so I know how tough this can be. But I do think we ought to try.
Conclusion: Political methodology as a career
I want to conclude by welcoming women into methodology: it’s a vibrant and growing field with deep roots in substantive political problems, quantitative analysis, and epistemology. I encourage you to find a mentor who is willing to help you pursue these interests.
Moreover, I think that (contra our correspondent’s note) it can be a very good career move. There are a comparatively small number of jobs in pure methodology, but many more which ask for some methodological teaching ability alongside one of the substantive fields. Perhaps most importantly, an education in political methodology opens doors outside of academia alongside those it opens within. Data scientist is apparently “the sexiest job of the 21st century” according to the Harvard Business Review, and there is ample opportunity in this field. According to McKinsey and Co., “The United States alone faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of big data.”
References

Beck, Nathaniel, and Jonathan N. Katz. 1995. “What to Do (and Not to Do) with Time-Series Cross-Section Data.” American Political Science Review 89(3): 634–47.
Brambor, Thomas, William Roberts Clark, and Matt Golder. 2006. “Understanding Interaction Models: Improving Empirical Analyses.” Political Analysis 14(1): 63–82.
King, Gary, Michael Tomz, and Jason Wittenberg. 2000. “Making the Most of Statistical Analyses: Improving Interpretation and Presentation.” American Journal of Political Science 44(2): 347–61.
McKelvey, Richard D., and Thomas R. Palfrey. 1995. “Quantal Response Equilibria for Normal Form Games.” Games and Economic Behavior 10(1): 6–38.
———. 1998. “Quantal Response Equilibria for Extensive Form Games.” Experimental Economics 1(1): 9–41.
Rasch, Georg. 1960. Probabilistic Models for Some Intelligence and Attainment Tests. Copenhagen: Danish Institute for Educational Research. (Expanded edition, 1980, Chicago: University of Chicago Press.)
Riker, William H., and Peter C. Ordeshook. 1968. “A Theory of the Calculus of Voting.” American Political Science Review 62(1): 25–42.
Shultziner, Doron. 2013. “Genes and Politics: A New Explanation and Evaluation of Twin Study Results and Association Studies in Political Science.” Political Analysis 21(3): 350–67.
Stock, James H., and Francesco Trebbi. 2003. “Retrospectives: Who Invented Instrumental Variable Regression?” The Journal of Economic Perspectives 17(3): 177–94.
Tomz, Michael, Jason Wittenberg, and Gary King. 2003. “CLARIFY: Software for Interpreting and Presenting Statistical Results.” Journal of Statistical Software 8(1): 1–30.
Wald, Abraham. 1940. “The Fitting of Straight Lines If Both Variables Are Subject to Error.” The Annals of Mathematical Statistics 11(3): 284–300.