Peer Review 2027
Last November, I hosted an unusual kind of conference here in Florence.1 Rather than presenting new work from within a given subfield or discipline, this was a convening of journal editors from across the disciplines of political science, communication science and sociology. The goal was to figure out how to adapt the peer review and publication process to the world of LLMs we increasingly inhabit.
I was super excited about this opportunity to engage in serious metascientific theorizing with actors who have both the knowledge to think seriously about possible futures and the institutional power to actually work towards them. We had representatives from established “top” disciplinary journals (Journal of Communication, Sociological Science, American Journal of Political Science) as well as more specialized, dynamic subfield journals (Political Communication, Political Science Research & Methods, Journal of Experimental Political Science, and of course, Journal of Quantitative Description: Digital Media).
I want to emphasize that this kind of interdisciplinary editor-driven collaboration is extremely uncommon; for me, simply getting to share our tacit knowledge was worth the price of admission, and I strongly encourage more editors to organize such events, and funders to help host them. I had a blast. Thanks to everyone who participated.
The result of this meeting is a new working paper that we hope will provide a roadmap for academic institutions to weather the coming storm of technological change. It is essential that social scientists take an affirmative stance on this issue, and we believe that journals and editors are the actors best suited to this task: they are the ones charged with vetting the quality, validity, relevance and importance of the knowledge produced by the academic system.
Peer Review 2027: Scenarios for Academic Publishing in the Age of AI
(note that the following discussion is my gloss of the paper, not necessarily something that every author will agree with)
Accepting the premise that LLMs and associated advances are a potentially revolutionary epistemic technology, we found it essential to think beyond individual policies or reforms. Academic publishing is a dynamic and extremely competitive ecosystem, so thinking about individual cause-and-effect relationships is insufficient; we need to think about different equilibria, different possible futures. These possibilities were evaluated in terms of both normative desirability (after we outlined the various competing goals of peer review and journal publication more broadly) and game-theoretic stability.
While I desperately encourage you to read the whole thing, a key upshot of this exercise is the weakness of an equilibrium in which authors are restricted in their use of AI. Regardless of normative desirability (and I think there are very legitimate concerns about the quality and especially diversity of research which leans heavily on LLMs), there is simply too much of an incentive for individual authors to defect in such an equilibrium, and detection of AI use for many relevant tasks will always be extremely fraught. The rule against plagiarism is a stable equilibrium precisely because detection is so dispositive; AI use functions very differently.
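To make the defection logic concrete (this is my own stylized illustration, not a model that appears in the paper): suppose an author who covertly uses AI under a “no AI” rule gains a productivity benefit b, gets caught with probability p, and suffers a sanction c if caught. Defection pays whenever

\[
b - p\,c > 0 \quad\Longleftrightarrow\quad p < \frac{b}{c}.
\]

For plagiarism, p is effectively close to one, so the norm holds under any meaningful sanction; for covert LLM use, p is small and likely shrinking, so even a modest b tips authors towards defection.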
Based on this analysis, we advocate for an equilibrium in which authors are allowed to use AI, and explicitly encouraged to do so for some concrete, practical tasks which we discuss in the paper. But the peer review process is already under heavy strain from increased submission rates; this move will only work if it is accompanied by a collective shift in how much effort we allocate to the task of evaluation. This is where we should increase our investment, in terms of resources and prestige. Although producing knowledge (or, at least, publishing papers) has been the primary coin of the academic realm, as LLMs become able to automate more and more steps of the research process, we need to double down on our most valuable asset: taste.
As I wrote a few weeks ago, if we want things to stay the same (here, if we want humans to be in charge of the social science process), things will have to change.
The ad hoc nature of peer review is itself an unstable equilibrium: individual scientists are incentivized to shirk, that is, not to contribute their share of “service to the discipline” in the form of high-quality peer reviews, or of serving as editors or at least on editorial boards. The system of universal, external peer review has muddled through since it was more or less accidentally invented in the US in the postwar era (a history we touch on in the paper), but it cannot sustain itself if even a small subset of authors supercharge their rate of PDF production.
This means different standards for LLM use by reviewers than by authors. We’re not here to take a moralistic stance about which technologies are “good” or “bad”; we’re trying to design a system that is both robust to gamesmanship and produces good outputs. And we think that this is the part of the loop where it is most important to insist on keeping humans.
There are a number of institutional innovations that we encourage journals to test out, including the use of LLMs for concrete tasks like computational reproducibility checks, but even with an increase in the total human time spent on knowledge evaluation and moderate efficiency gains, we anticipate that increased production might still be a problem. Here, it may be necessary to impose additional frictions, to offload some of the work of evaluation onto the authors themselves; the incentives to simply flood the zone are otherwise irresistible. (See this experiment in a year of “vibe researching” by Joshua Gans…and that was mostly before Claude Code with Opus 4.5!)
Submission fees are one such example, already in use in some disciplines and journals. There are of course implications for resource inequalities, but these can be mitigated with targeted exemptions. A stronger option is hard caps on the number of simultaneous submissions by an author, or the number of submissions per year. This option comes with obvious downsides for scientific production, and we do not yet endorse it, but some version of it may prove necessary unless we can get the institutional setup right.
We think that it is essential to move quickly with some of these reforms because the current equilibrium is extremely unstable. Ambiguity abounds around the use of LLMs in social science; as a result, the least scrupulous may be the heaviest adopters. We need to create common knowledge about LLM use, to normalize this use, and to encourage researchers to do so in a way that enhances scientific knowledge production. Academic journals are the actors best suited to guide social science to a better place.
However, neither the path to this equilibrium nor the “equilibrium” itself will hold unless we have the data needed to adapt both to further technological developments and to the uptake and proliferation of this technology. We need to develop metascientific data streams that we can feed back into the system. This means trace data on submission and publication rates as well as qualitative data from surveys of authors, reviewers, and editors. There is not going to be a single set of policies that will continue to function; the system has to be adaptable, and adaptation is only possible with feedback.
The status quo is no longer tenable, and it’s not like it was perfect, anyway. This is both dangerous (if we do nothing) and exciting (if we can figure out the right things to do, and then do them, and then keep figuring out what the next right things to do are, and then do those too). To quote the final line of the paper:
The peer review system belongs to the social science community as a whole; it is ours to reform, and now is the time to do it.
Thanks again to all the co-authors and participants in this workshop, and to the EUI Research Council for the funding which made it possible.



The simple idea of limiting submissions to one per author per calendar year -- no exceptions -- is brilliant. It is totally incentive-compatible at the journal level and would have clear aggregate benefits for the discipline.
I know some people who would pitch a fit about their "right" to submit as many papers as they want, but that's silly. Who cares. Create incentives for people to submit their best work only and watch the knock-on benefits for everyone.
Thanks for sharing Kevin! I was actually just writing an opinion piece about volunteering for publication reform efforts, so this is super useful. And also great you all are thinking ahead--not exactly a trait academic publishing is known for.