Meta-Science is Actually Pretty Hard
Against Unscientific Reflections on Disciplinary Best Practices!
This blog is about the internet and about meta-scientific approaches to understanding and improving social science practice. (If you’re only interested in the former, may I suggest you skip this post and check out this article I published in Real Life earlier this year?)
There are many people actively working in Meta-Science—conducting empirical research, thinking through hard problems, engaging in an increasingly lively public discourse, and most importantly, building new institutions. It’s an exciting time! There is a crop of young scholars who have inherited a creaking set of practices and institutions but who are committed to using our skills to reform social science; there are established-but-not-complacent tenured academics with the good will to support this endeavor rather than pulling the ladder up behind them.
So I was initially excited to see a recent symposium in the American Political Science Association-run journal PS: Political Science and Politics—its organ for discussions about the discipline, and thus potentially the best outlet for serious Meta-Science—on the topic of desk rejects. The content of the articles—and in particular, the data-free contributions by James Gibson—left me somewhat disappointed.
My complaint is two-fold. First, as a matter of academic pride, I am far from unique in bristling when other scholars make incursions into my area of expertise, bringing their own assumptions rather than engaging with research that others have worked hard to produce. Second, the realities of journal publication are psychologically brutal for early-career researchers; I can attest to this firsthand, and I know that many of my peers agree.
People Are Already Working On These Questions
I am puzzled. Gibson is an esteemed and experienced political scientist, one of our most prolific publishers, and no stranger to the value of empiricism or quantitative methods. Why (or perhaps, how) does he think that he has access to the scientific knowledge he needs on the topic of desk rejects?
As I’ve argued before, the intuitions of the major gatekeepers of our discipline are a crucial input into the scientific process. Gibson’s candor in giving advice about a topic about which he professes ignorance may give some insight to other young scholars about the intuitions of “the 1%” of our discipline.
After statements like
“On this and many of the empirical issues I address in this article (e.g., the benefits of manuscript reviews, term limits, and the financial benefits of publications), there is scientific evidence that could be cited and consulted.”
(Imagine replacing the content of the parentheses with another topic in political science!) and
“I do not profess to know much about the financial aspects of journals, but at least some seem to be profit-making and others are sponsored by wealthy organizations such as the American Political Science Association.”
we get, in the response article to his critics, titled “Best Practices for the Use of Desk Rejects”:
“further addressing the frailties of the desk-reject system is perhaps not the best use of anyone’s time. Instead, I suggest the following best practices that all editors may want to consider”
I disagree! We should absolutely want to address the frailties of the peer review system, and we should want to do so in a rigorous scientific fashion. This is why we need to institutionalize Meta-Science as Political Methodology. But until that happens, I believe that good empiricists should be skeptical about the possibility of knowledge of “best practices.”
To be fair, this is a dynamic area of research, much of which is still unpublished or published in interdisciplinary venues that are admittedly difficult to keep track of. But that doesn’t excuse the fact that none of the articles in the symposium cites “Does Peer Review Identify the Best Papers?”, published by Justin Esarey in 2017 in the same journal.
This paper uses a simulation approach to show that, under plausible assumptions, there is significant arbitrariness in the process of peer review, but that “a peer review system with an active editor—that is, one who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions—can mitigate some of these effects.”
From Esarey (2017)
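The flavor of Esarey’s argument can be conveyed with a toy simulation. To be clear, this is my own minimal sketch, not his actual model: the noise scales, the `active_editor` screening rule, and the acceptance numbers are all invented for illustration. Papers have a latent quality; each reviewer observes quality plus idiosyncratic noise; and we measure “arbitrariness” as how much the accepted set changes when the same papers are re-reviewed with fresh noise.

```python
import random

def review_round(quality, n_reviewers=3, noise=1.0, accept_n=50, active_editor=False):
    """One round of a toy peer-review process over a fixed set of papers.

    Each paper has a latent quality; each reviewer observes that quality
    plus idiosyncratic noise, and the journal accepts the accept_n papers
    with the highest mean reviewer score. With active_editor=True, the
    editor first desk-rejects papers whose (noisily screened) quality
    looks below average -- a crude stand-in for desk rejection before
    review. Returns the set of accepted paper indices.
    """
    pool = range(len(quality))
    if active_editor:
        pool = [i for i in pool if quality[i] + random.gauss(0, 0.5) > 0]
    mean_score = {
        i: sum(quality[i] + random.gauss(0, noise) for _ in range(n_reviewers)) / n_reviewers
        for i in pool
    }
    ranked = sorted(mean_score, key=mean_score.get, reverse=True)
    return set(ranked[:accept_n])

random.seed(1)
quality = [random.gauss(0, 1) for _ in range(500)]  # latent paper quality

# "Arbitrariness": rerun review with fresh reviewer noise on the SAME
# papers and measure how much the accepted set changes between runs.
for editor in (False, True):
    runs = [review_round(quality, active_editor=editor) for _ in range(20)]
    overlaps = [len(a & b) / 50 for a, b in zip(runs, runs[1:])]
    print(f"active editor={editor}: mean overlap across reruns = {sum(overlaps) / len(overlaps):.2f}")
```

Even this crude version makes the core point: with noisy reviewers and a hard acceptance cap, which specific papers get in varies substantially from one draw of reviewers to the next.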
Peer Review is Arbitrary, and Brutal for Young Scholars
The world has changed. Current senior scholars entered academia during a golden age of expanding undergraduate enrollment and a lack of competition for jobs in the professoriate (among people who were fortunate enough to be in a position to get a PhD).
Graduate students must publish to have a chance to get a job. But journals—especially top ones—have plummeting acceptance rates (driven by increased submissions) that entail increasingly arbitrary peer review decisions. If the APSR has to accept fewer papers than there are quality submissions, rejections happen on arbitrary (or worse, nepotistic) grounds.
Perhaps worse is just how long the process takes. Our careers are hanging in the balance; editors universally report that the biggest constraint on their speed is securing reviewers and then begging those reviewers to follow through. Preserving that pool of reviewer time is thus of immense and immediate import to many precarious early-career researchers. Our only hope for surviving the harrowing numerical filter that makes it impossible for more than ~20% of incoming PhD students to end up in any kind of tenure-track gig is to establish a publication record far earlier than anyone would prefer. The delays involved in peer review have been the single most psychologically difficult element of my early career as a scholar, and I can report widespread agreement on this point among my peers.
We thus already experience the peer review process as cruel and arbitrary. Gibson is concerned that one of the issues with desk rejection is “that the rejection is procedurally unfair,” but so is peer review.
Gibson elects to compare desk rejection to the use of “Stop and Frisk” tactics by the NYPD; if we’re going to be in the business of hyperbolic analogies to criminal justice, I’ll toss out the principle that “justice delayed is justice denied.” Everyone has stories of initial peer review lingering well over the year mark, only to hinge on a half-baked referee report that could only have come from a reviewer with a grudge and a hangover.
Thanks for the Data!
In addition to complaining, I wanted to call attention to the rest of the papers in the symposium. Although each paper uses an ad hoc measurement strategy (a sign of an immature science), they are all valuable contributions by people who are actively involved in the peer review process, as editors of major journals. To their immense credit, they bring data (both quantitative and qualitative) to the conversation. It is my hope that we can develop more rigorous and extensive metrics, formalize some of these excellent contributions, and, in short, do science.
Garand and Harman conduct a survey of editors and an analysis of data from the American Political Science Association-affiliated journals APSR, POP, and PS. With the submission data, they estimate the likelihood of desk rejection versus publication according to a variety of attributes of the manuscript and authors. There are a few interesting divisions by subfield, but the topline result is that they can rule out at least blatant inequities in the application of desk rejection.
Dolan and Lawless provide some data about desk rejects from AJPS, including the essential descriptive fact that over half of the articles they’ve desk rejected are facially inappropriate for AJPS (e.g., “journalistic or opinion pieces, review essays, or those that contain no data or theoretical argument”). They argue that we need other institutions for providing feedback to authors, and that the current incentive structure of academic publishing is rife with opportunistic “over-fishing”:
Many scholars “aim high” in their submissions. They know that they have only a slight chance of having a manuscript accepted at a top journal, but their likely consolation is three helpful reviews. This misuse of the journal-submission process must change.
Ansell and Samuels take their empirics the farthest, tracking citations to papers that CPS either desk rejected, rejected after review, or published. These authors are doing meta-science, whether they identify it as such or not. Their analysis isn’t perfect (there’s no experimental design, and it cannot separate the causal effect of publication in a top subfield journal from the selection effect they’re looking for), but it’s a good first step in accumulating actual data about the publication process.
They also win the award for “spiciest sentences of the symposium”:
Third, Gibson suggests that journals should discourage graduate students from submitting. Most graduate students, he argues, do not “produce publishable work” and allowing “unworthy graduate student papers to muck up the review process is much too high a price to pay.” This is simply elitist. We could equally suggest that most older scholars are long past their prime and should submit fewer papers.
Bonneau and Kanthak describe their experience editing SPPQ. They make the argument that editorial discretion is not going away; they have the power to select reviewers (likely, reviewers who share their broad orientation) and to make final decisions based on reviewer comments. This is an important and eminently plausible theory; I’d love to see it operationalized and studied empirically.
Brown discusses the publishing-as-mentorship model in which the editor puts in serious work to improve the process, throws parties and rewards reviewers. This is what it looks like when you’re actually invested in building up an institution, as she has done with PGI. It requires that established people with sufficient resources devote time in a basically selfless fashion. So while I don’t think that the publishing-as-mentorship model will necessarily scale up, I think it is admirable, and something that I hope to incorporate more as a journal editor.
In the spirit of broadening the Meta-Scientific conversation, I can report on my (brief) experiences editing a journal, as well. Earlier this year, I co-founded the Journal of Quantitative Description: Digital Media. The full manifesto is here, but a big part of our Meta-Scientific intervention was the piloting of a new model for peer review.
Instead of burning reviewer time to generate prestige through a low acceptance rate, we aim to accept a high percentage of manuscripts sent out for review. And we guarantee a desk rejection rate of zero for full manuscripts.
How do we do this? By mandating a pre-submission “Letter of Inquiry,” the details of which are here. This involves a high degree of editor involvement at the very top of the “funnel” of submissions. If we accept the LOI and invite the full manuscript, we guarantee that we agree that the research is a good fit for our journal and that it passes our standards for importance and basic methodological soundness. This removes that component of evaluation from the reviewers, giving them more space to suggest improvements to the manuscript rather than find nits to pick.
We are recording the submissions we receive and will have a report on the results of this practice in a few years.
Finally, Hassell is broadly against desk rejections. He objects to unblinded desk rejection as exacerbating inequalities, and cites some legit meta-scientific studies to this effect. That’s entirely fair. And I am totally on board with his suggestion that we simply require people to review papers for a given journal in order to submit papers there. That we do not do this already is, in my view, a scandal.
It’s true that there is inequality in the discipline of political science (and other academic disciplines). But the broader critique here—that the solution to this problem is to reduce desk rejections—is off-base, for two reasons.
First, the internet makes all sorts of new institutions possible, and we need to re-organize academic practices to take full advantage. Shoehorning feedback from peers through the sclerotic institutions of journal review is a short-sighted, high-waste and low-reward way to address the inequality in socialization opportunities that Hassell correctly identifies.
We have to build better institutions. Our professional organizations should be taking the lead here, but they are … struggling … to adapt to the realities of the modern digital environment. The online APSA 2020 was more of a dumpster fire than APSA 2014, in which one of the hotels literally caught on fire. Many of the authors in the symposium point to the need for better institutions, but we have to do it ourselves. As we write in the JQD:DM manifesto, “critique alone does not change the material conditions and incentives of practicing academics.”
Second, as I have argued previously, the realities of the “industrial organization” of social scientific knowledge production are moving towards concentration and thus inequality. The cutting edge in particle physics research requires huge concentrations of capital. While this does entail inequality between physicists who have access to the Large Hadron Collider and those who do not, the goal of theoretical physics is not to maximize equality between physics professors. Equality is preferable to inequality, but it is not the defining criterion for how we should organize social science.
Still, social science reformers need to think through (and test!) the sociological implications of their reforms. Part of why I am such a big proponent of Quantitative Description is that it offers an avenue for scholars with fewer resources to make essential contributions to knowledge. As standards of rigor for causal inference and now external validity rise, the number of research teams who can afford to do this kind of work decreases. Rather than denying this reality, we should elevate the status of other kinds of work.
To conclude, then, a note to Professor Gibson. I have been critical, but I hope I’ve been able to explain where I’m coming from. It is clear that your goal in writing these articles was noble: you wanted to improve social science research practice. For this, I am thankful; the default stance for successful senior scholars is institutional complacency.
The issue of social science reform will not go away; instead, I hope that Meta-Science will become institutionalized as part of methodology research. So I would love for you (and the other authors in this symposium, and… anyone who wants to!) to dig into this research and become an active member of the scientific reform community. We can use science to figure out how to make better science!