TPM: Policy on Data Access and Research Transparency

After some deliberation, the editorial team at The Political Methodologist has formulated a policy on data access and research transparency that will apply to all TPM submissions from this point forward. The policy is laid out in our instructions for authors, but we also print it here in full.


Authors contributing their work for publication in The Political Methodologist must agree to certain requirements allowing readers to access the data and analytic procedures used to produce the submitted research. These conditions are based on and adapted from the Data Access and Research Transparency (DA-RT) statement available at http://www.dartstatement.org/.

  1. Authors are required to make supporting materials for any quantitative analysis available on The Political Methodologist’s dataverse before the paper is published. These materials must include:

a) any quantitative data used in their analysis

b) any analytical procedures used to produce quantitative results (e.g., computer code, Stata do files, R script files, spreadsheet calculations, mathematical proofs, etc.).

If there are any restrictions on making the data publicly available (e.g., because of proprietary agreements or Institutional Review Board confidentiality restrictions), authors should notify the editors of these restrictions when the article is submitted. At that point, the editor will either (a) issue a decision granting an exemption to the data sharing policy (with or without conditions) and then review the article, or (b) decline to review the article.

  2. Qualitative sources of supporting information (including but not limited to interview transcripts, copies of archival documents, and ethnographic field notes) may be exempted from the transparency and data-sharing requirements spelled out in item (1) above at the author’s request. However, authors must explicitly request an exemption for qualitative sources of supporting information at the time of submission. At that point, the editor will determine whether the sources qualify under this policy and (a) issue a decision granting an exemption to the data sharing policy (with or without conditions) and then review the article, or (b) decline to review the article.

Note: If an exemption is not explicitly requested at the time of submission, it is expected that authors will make qualitative sources of supporting materials available on The Political Methodologist’s dataverse before the paper is published.

  3. Any supporting materials exempted from transparency and data-sharing requirements must be explicitly identified as such upon publication (e.g., by a footnote indicating this exemption and the reason for the exemption).
  4. Authors must provide appropriate citations to any quantitative data and/or sources of qualitative information used in their analysis. For quantitative data, the citation must identify a dataset’s author(s), title, date, version, and location (e.g., an internet URL or other persistent identifier). Referenced sources (data, archival documents, secondary sources, historical documents, etc.) must be properly cited regardless of any exemption from transparency and data-sharing requirements.
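
For illustration, a data citation meeting the requirements in item (4) might look like the following; the authors, dataset title, version, and identifier here are invented for the example:

    Doe, Jane, and John Smith. 2015. "Replication Data for: An Example Analysis."
    Version 2.0. The Political Methodologist Dataverse.
    http://dx.doi.org/10.0000/FK2/EXAMPLE (hypothetical identifier)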

Print Edition Released: Special Issue on Peer Review

The latest print edition of The Political Methodologist (Volume 23, Number 1) has been released! This edition features our special issue on peer review.

IMC: Emily Ho talk, “Developing new practices for increasing transparency in social science research,” Friday, 2/12 at 12:00pm Eastern

This Friday, 2/12 at 12:00 noon Eastern, the International Methods Colloquium will host a talk by Emily Ho (Psychometrics & Quantitative Psychology, Fordham University) titled “Developing new practices for increasing transparency in social science research: an investigation of statistical and psychophysical methods used to detect change.”

The abstract for the talk is:

The reproducibility crisis in social science disciplines (e.g., political science, economics, and psychology) has called for a paradigm shift in the way researchers design, execute, and evaluate their studies. Empirical distributions, or data arising from real-world phenomena, are often non-normal. Though researchers often wish to compare such distributions to assess the magnitude of an effect, most of the literature has focused on the normal distribution, which is as prevalent in reality as unicorns are (Micceri, 1989). Thus, incongruent approaches abound for how best to capture differences between empirical distributions, contributing to the lack of transparency in social science research. These inconsistencies reveal the need for a robust theoretical and experimentally defensible framework for designing and evaluating research involving empirical distributions. The current work summarizes theoretical, experimental, and psychophysical approaches to further expand the methodological tools available in social science research.

To tune in to the presentation and participate in the discussion after the talk, visit http://www.methods-colloquium.com/ and click “Watch Now!” on the day of the talk. To register for the talk in advance, click here:

https://attendee.gotowebinar.com/register/2933428126011215873

The IMC uses GoToWebinar, which is free to use for listeners and works on PCs, Macs, and iOS and Android tablets and phones. You can be a part of the talk from anywhere around the world with access to the Internet. The presentation and Q&A will last for a total of one hour.


IMC: Roundtable on Preregistration of Research Designs on Friday, 2/5 at 12:00pm Eastern

This Friday, 2/5 at 12:00 noon Eastern, the International Methods Colloquium will host a roundtable discussion on the movement to encourage preregistration of research designs. Our scheduled roundtable participants are:

  1. Macartan Humphreys of Columbia University
  2. James Monogan of the University of Georgia
  3. Arthur Lupia of the University of Michigan
  4. Ryan T. Moore of American University

To tune in to the roundtable and participate in the discussion after the talk, visit http://www.methods-colloquium.com/ and click “Watch Now!” on the day of the talk. To register for the talk in advance, click here:

https://attendee.gotowebinar.com/register/6333590869091694593

The IMC uses GoToWebinar, which is free to use for listeners and works on PCs, Macs, and iOS and Android tablets and phones. You can be a part of the talk from anywhere around the world with access to the Internet. The roundtable will last for a total of one hour.


2015 TPM Annual Most Viewed Post Award Winner: Nicholas Eubank

On behalf of the editorial team at The Political Methodologist, I am proud to announce the 2015 winner of the Annual Most Viewed Post Award: Nicholas Eubank of Stanford University! Nick won with his very informative post “A Decade of Replications: Lessons from the Quarterly Journal of Political Science.” This award entitles the recipient to a line on his or her curriculum vitae and one (1) high-five from the incumbent TPM editor (to be collected at the next annual meeting of the Society for Political Methodology).[1]

The award is determined by examining the total accumulated page views for any piece published between December 1, 2014 and December 31, 2015; pieces published in December are considered for two years’ awards to give them a fair chance to garner page views. The page views for December 2014 and calendar year 2015 are shown below; orange hash marks next to a post indicate that it was published during the eligibility period (and was thus eligible to receive the award).

[Figure: December 2014 page view statistics] [Figure: 2015 page views by post]

Nick’s contribution was viewed 2,758 times during the eligible time period, making him the clear winner for this year’s award. Congratulations, Nick!

Our runner-up is Brendan Nyhan, whose post “A Checklist Manifesto for Peer Review” was viewed 1,421 times during the eligibility period. However, because Brendan’s piece was published in December 2015, it will be eligible for consideration for next year’s award as well!

In related news, Thomas Leeper’s post “Making High-Resolution Graphics for Academic Publishing” continues to dominate our viewership statistics; this post also won the 2014 TPM Most-Viewed Post Award. Although we don’t have a formal award for him, I’m happy to recognize that Thomas’s post has had a lasting impact on our readership and to congratulate him on that achievement.

I am also happy to report that The Political Methodologist continues to expand its readership. Our 2015 statistics are here:

[Figure: 2015 viewership statistics]

This represents a considerable improvement over last year’s numbers, which gave us 43,924 views and 29,562 visitors. I’m especially happy to report the substantial increase in unique visitors to our site: over 8,000 more unique visitors in 2015 than in 2014! Our success is entirely attributable to the excellent quality of the contributions published in TPM. So, thank you, contributors!

Notes

[1] A low-five may be substituted upon request.


Call for Paper Proposals: Visions in Methodology 2016 at University of California, Davis

[Ed. note: this post is taken from a message to the PolMeth listserv by Amber E. Boydstun, Assistant Professor at UC Davis. The 2016 VIM conference is hosted by UC Davis and UC Merced.]

The 2016 Visions in Methodology (VIM) Program Committee is now accepting proposals for papers for the VIM 2016 Workshop, to be held at the University of California, Davis from May 16 to 18, 2016. We invite submissions from female graduate students and faculty that address measurement, causal inference, and the application of advanced statistical methods to substantive research questions.

Participants are expected to arrive at UC Davis by 5:00 pm on May 16. The workshop consists of two days of research presentations and opportunities for networking and mentoring, and concludes with a dinner on May 18. Participants are welcome to depart on May 19 or to take part in an optional trip to Napa Valley on May 19. VIM offers financial support for travel, lodging, and food for selected participants for activities from May 16 to May 18.

To apply, please send the title and description of your research project (100-150 words) along with a curriculum vitae to visionsinmethodology@gmail.com. The application deadline is January 31, 2016. Female graduate students and faculty at all ranks are encouraged to apply, although preference will be given to those at earlier stages of their careers.

The Visions in Methodology (VIM) workshop is part of a broad effort to support women in the field of political methodology. VIM provides opportunities for scholarship, networking, and mentoring for women in the political methodology community. In addition to providing a forum to present research, VIM aims to connect women in a subfield of political science where they are underrepresented.

Program Co-Chairs
Amber Boydstun (UCD); Courtenay Conrad, Emily Ritter (UCM)

Program Committee
Cheryl Boudreau, Adrienne Hosek, Heather McKibben, Lauren Peritz (UCD); Jessica Trounstine (UCM)


Acceptance rates and the aesthetics of peer review

Based on the contributions to The Political Methodologist‘s special issue on peer review, it seems that many political scientists are not happy with the kind of feedback they receive from the peer review process. A theme seems to be that reviewers focus less on the scientific merits of a piece–viz., what can be learned from the evidence offered–and more on whether the piece is to the reviewer’s taste in terms of questions asked and methodologies employed. While I agree that this feedback is unhelpful and undesirable, I am also concerned that it is a fundamental feature of the way our peer review system works. More specifically, I believe that a system of journals with prestige tiers enforced by extreme selectivity creates a review system where scientific soundness is a necessary but far from sufficient criterion for publication, meaning that fundamentally aesthetic and sociological factors ultimately determine what gets published and inform the content of our reviews.

As Brendan Nyhan says, “authors frequently despair not just about timeliness of the reviews they receive but their focus.” Nyhan seeks to improve the focus of reviews by offering a checklist of questions that reviewers should answer as a part of their reviews (omitting those questions that, presumably, they should not seek to answer). These questions revolve around ensuring that evidence offered is consistent with conclusions (“Does the author control for or condition on variables that could be affected by the treatment of interest?”) and that statistical inferences are unlikely to be spurious (“Are any subgroup analyses adequately powered and clearly motivated by theory rather than data mining?”).

The other contributors express opinions in sync with Nyhan’s point of view. For example, Tom Pepinsky says:

“I strive to be indifferent to concerns of the type ‘if this manuscript is published, then people will work on this topic or adopt this methodology, even if I think it is boring or misleading.’ Instead, I try to focus on questions like ‘is this manuscript accomplishing what it sets out to accomplish?’ and ‘are there ways that my comments can make it better?’ My goal is to judge the manuscript on its own terms.”

Relatedly, Sara Mitchell argues that reviewers should focus on “criticisms internal to the project rather than moving to a purely external critique.” This is explored more fully in the piece by Krupnikov and Levine, where they argue that simply writing “external validity concern!” next to any laboratory experiment hardly addresses whether the article’s evidence actually answers the questions offered; in a way, the attitude they criticize comes uncomfortably close to arguing that any question that can be answered using laboratory experiments doesn’t deserve to be asked, ipso facto.

My own perspective on what a peer review ought to be has changed during my career. Like Tom Pepinsky, I once thought my job was to “protect” the discipline from “bad research” (whatever that means). Now, I believe that a peer review ought to answer just one question: What can we learn from this article? [1]

Specifically, I think that every sentence in a review ought to be:

  1. a factual statement about what the author believes can be learned from his/her research, or
  2. a factual statement of what the reviewer thinks actually can be learned from the author’s research, or
  3. an argument about why something in particular can (or cannot) be learned from the author’s research, supported by evidence.

This feedback helps an editor learn the marginal contribution that the submitted paper makes to our understanding, informing his/her judgment about publication. It also helps the author understand what he/she is communicating in the piece and whether claims must be trimmed or hedged to ensure congruence with the offered evidence (or whether more evidence must be offered to support claims that are central to the article).

Things that I think shouldn’t be addressed in a review include:

  1. whether the reviewer thinks the contribution is sufficiently important to be published in the journal
  2. whether the reviewer thinks other questions ought to have been asked and answered
  3. whether the reviewer believes that an alternative methodology would have been able to answer different or better questions
  4. whether the paper comprehensively reviews extant literature on the subject (unless the paper defines itself as a literature review)

In particular, I think that the editor is the person in the most appropriate position to decide whether the contribution is sufficiently important for publication, as that is a part of his/her job; I also think that such a decision should be made (whenever possible) by the editorial staff before reviews are solicited. (Indeed, in another article I offer simulation evidence that this system actually produces better journal content, as evaluated by the overall population of political scientists, compared to a more reviewer-focused decision procedure; a toy sketch of this kind of comparison appears below.) Alternatively, the importance of a publication could be decided (as Danilo Freire alludes) by the discipline at large, as expressed in readership and citation rates, and not by one editor (or a small number of anonymous reviewers); such a system is certainly conceivable in the post-scarcity publication environment created by online publishing.
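
To give a flavor of what such a simulation can look like, here is a minimal Monte Carlo sketch comparing an editor-screening procedure to a reviewer-only one. This is not the design from the article cited above: the quality model, noise levels, pool sizes, and acceptance rules are all invented for illustration.

    # Toy comparison of two selection procedures. All parameters are
    # hypothetical; this sketch is illustrative, not the cited simulation.
    import numpy as np

    rng = np.random.default_rng(42)
    n_papers, n_slots = 500, 50

    # Each paper has a latent value to the discipline.
    value = rng.normal(size=n_papers)

    # (a) Editor-focused: the editor desk-screens on a noisy importance
    #     signal, then noisy reviewer scores rank the survivors.
    editor_signal = value + rng.normal(scale=1.0, size=n_papers)
    survivors = np.argsort(editor_signal)[-3 * n_slots:]
    reviewer_signal = value[survivors] + rng.normal(scale=1.0, size=survivors.size)
    accepted_editor = survivors[np.argsort(reviewer_signal)[-n_slots:]]

    # (b) Reviewer-focused: noisy reviewer scores alone pick the winners.
    reviewer_only = value + rng.normal(scale=1.0, size=n_papers)
    accepted_reviewer = np.argsort(reviewer_only)[-n_slots:]

    print("mean value, editor-screened:", value[accepted_editor].mean())
    print("mean value, reviewer-only:  ", value[accepted_reviewer].mean())

Under these arbitrary assumptions the editor-screened procedure tends to do better because each surviving paper receives two independent noisy evaluations; whether that advantage holds in general depends entirely on the parameters chosen.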

Of course, as our suite of contributions to TPM makes clear, most of us do not receive reviews that are focused narrowly on the issues that I have outlined. Naturally, this is a frustrating experience. I think it is particularly trying to read a review that says something like, “this paper makes a sound scientific contribution to knowledge, but that contribution is simply not important enough to be published in journal X.” It is annoying precisely because the review acknowledges that the paper isn’t flawed, but simply isn’t to the reviewer’s taste. It is the academic equivalent of being told that the reviewer is “just not that into you.” It is a fundamentally unactionable criticism.

Unfortunately, I believe that authors are likely to receive more, not less, of such feedback in the future, regardless of what I or anyone else may think. The reason is that journal acceptance rates are so low, and the proportion of manuscripts that make sound contributions to knowledge is so high, that other criteria must necessarily be used to select the papers that will be published from the set of those that could credibly be published.

Consider that in 2014, the American Journal of Political Science accepted only 9.6% of submitted manuscripts and International Studies Quarterly accepted about 14%. The trend is generally downward: at Political Research Quarterly, acceptance rates fell by a third between 2006 and 2011, to just under 12 percent. I speculate that far more than 10-14% of the manuscripts received by AJPS, ISQ, and PRQ were scientifically sound contributions to political science that could easily have been published in those journals–at least, this is what editors tend to write in their rejection letters!

When (let us say, for argument’s sake) 25% of submitted articles are scientifically sound but journal acceptance rates are less than half that value, editors (and, by extension, reviewers) must choose on criteria other than soundness when selecting articles for publication. It is natural that the slippery and socially constructed criterion of “importance,” in its many guises, would come to the fore in such an environment. Did the paper address the questions you think are most “interesting”? Did the paper use what you believe are the most “cutting edge” methodologies? “Interesting” questions and “cutting edge” methodologies are aesthetic judgments, at least in part, defined relative to the group of people making them. Consequently, I fear that the peer review process must become as much a function of sociology as of science because of the increasingly competitive nature of journal publication. Insofar as I am correct, I would prefer that these aesthetic judgments come from the discipline at large (as embodied in readership rates and citations) and not from two or three anonymous colleagues.
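
A back-of-the-envelope calculation makes the squeeze concrete. The figures below are hypothetical: the 25% soundness share comes from the paragraph above, and the pool size and acceptance rate are round numbers in the neighborhood of those quoted earlier.

    # Illustrative arithmetic only; all inputs are hypothetical.
    submissions = 1000       # hypothetical submission pool
    acceptance_rate = 0.10   # roughly the AJPS figure quoted above
    sound_share = 0.25       # the hypothetical soundness share from the text

    slots = int(submissions * acceptance_rate)   # 100 publishable slots
    sound = int(submissions * sound_share)       # 250 sound manuscripts
    print(f"{sound - slots} of {sound} sound papers rejected "
          f"on criteria other than soundness")

That is, even on these generous assumptions, 150 of 250 sound manuscripts (60%) must be turned away for reasons unrelated to their scientific soundness.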

Still, as long as there are tiers of journal prestige and those tiers are a function of selectivity, I would guess that the power of aesthetic criteria to influence the peer review process will persist. Indeed, I speculate that the proportion of sound contributions in the submission pool is trending upward because of the intensive focus of many PhD programs on rigorous research design training and the ever-increasing requirements of tenure and promotion committees. At the very least, the number of submissions is going up (from 134 in 2001 to 478 in 2014 at ISQ), so even if quality is stable, selectivity must rise if the number of journal pages stays constant. Consequently, I fear that a currently frustrating situation is likely to get worse over time, with articles being selected for publication in the most prominent journals of our discipline on progressively more whimsical criteria.

What can be done? At the least, I think we can recognize that the “tiers” of journal prestige do not necessarily mean what they once did in terms of scientific quality or even interest to a broad community of political scientists and policy makers. Beyond this, I am not sure. Perhaps a system that rewards authors more for citation rates and less for the “prestige” of the publication outlet would help. But such systems would undoubtedly also have unanticipated and undesirable properties, and it remains to be seen whether they would truly improve scholarly satisfaction with the peer review system.

Footnotes

[1] Our snarkier readers may be thinking that this question can be answered in just one word for many papers they review: “nothing.” I cannot exclude that possibility, though it is inconsistent with my own experience as a reviewer. If a reviewer believed that nothing could be learned from a paper, I would hope that he/she would provide feedback lengthy and detailed enough to justify that conclusion.
