The simple idea of limiting submissions to one per author per calendar year -- no exceptions -- is brilliant. It is totally incentive-compatible at the journal level and would have clear aggregate benefits for the discipline.
I know some people who would pitch a fit about their "right" to submit as many papers as they want, but that's silly. Who cares. Create incentives for people to submit their best work only and watch the knock-on benefits for everyone.
Yeah, one upshot of the exercise was to note that there are multiple goals of peer review -- assigning attention and vetting knowledge, for example. So for "top" journals that prioritize the attention component, it makes perfect sense to constrain submissions... and other journals will still be there if we're really worried about slowing down academic progress.
And people can always make more journals if space is really a constraint. If the supply of actually good ideas is that big, the discipline can make AEJs and AER-Insightses and Sociological Sciences.
Thanks for sharing, Kevin! I was actually just writing an opinion piece about volunteering for publication reform efforts, so this is super useful. It's also great you all are thinking ahead--not exactly a trait academic publishing is known for.
Also, I think you could make a strong tragedy-of-the-commons argument for restricting (or taxing) individual productivity as you suggest--even apart from LLMs. Enforcement, of course, is the problem. The issue is that over-production degrades collective value: the more we publish collectively, the less the whole is worth. It's in some sense a version of Goodhart's law, except with value, not just the measure.
Exactly -- if we all seriously internalize the cost of knowledge evaluation, it becomes clear that maximizing production is not actually efficiency-maximizing
The enforcement costs seem non-trivial though. Maybe in a different peer review/publishing equilibrium.
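A toy numerical sketch of that over-production dynamic (a minimal illustration with made-up payoff numbers, not a calibrated model): if each author's credit is their share of a fixed pool of attention, submitting more always pays unilaterally, but when everyone escalates, total credit stays flat while value per paper collapses.

```python
# Toy model of the publication commons -- all numbers are made up.
# The field has a fixed pool of attention; an author's credit is their
# share of the total papers published.

ATTENTION = 1000.0  # fixed pool of reader attention (arbitrary units)
N_AUTHORS = 100

def credit(own: int, others_each: int) -> float:
    """One author's credit, given their output and everyone else's."""
    total = own + (N_AUTHORS - 1) * others_each
    return ATTENTION * own / total

# Unilateral deviation: with everyone else at 1 paper, more is better for me.
print([round(credit(k, 1), 1) for k in (1, 2, 5)])  # [10.0, 19.8, 48.1]

# Symmetric escalation: everyone submits more, credit stays flat at 10.0
# while value per paper tanks -- the tragedy of the commons.
for k in (1, 5, 10):
    c = credit(k, k)
    print(f"everyone at {k}: credit {c:.1f}, value per paper {c / k:.2f}")
```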
Thanks, Kevin. As someone who's been considering editing a journal myself, I've had a lot of these points on my mind. Banning or discouraging AI use certainly seems like a losing, if not outright destructive, gamble, while introducing submission fees (with potential waivers for precarious scholars) and mandating computational reproducibility checks upon submission seem like slam dunks.
What's your sense of requiring an AI R3/R4 pass on all submissions, using something like the refine.ink service?
I think journals can try to outsource some of the evaluation component to an LLM designed to check certain concrete elements of the manuscript (e.g., computational reproducibility, and whether the analysis matches what the manuscript actually says). But we need to have control over the system. The impulse I want to avoid is saying "here's what humans are doing, what if AI does literally exactly the same thing as the humans"... AI is good and bad at different things than humans are; we should use the respective strengths of each.
And if we're doing this, I think it's essential that we don't let YET MORE random for-profit corporations enter the system of academic knowledge production. It costs us increasingly scarce resources and also means giving up autonomy and control (look what happened when we adopted Twitter as the primary mechanism of communication...).
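For the kind of journal-controlled check described above, here's a rough sketch of the shape it could take. Everything here is hypothetical -- the `run_all.py` entry-point convention and the `query_llm` placeholder are assumptions for illustration, not a real service or API:

```python
# Hypothetical sketch of a journal-run manuscript check, not a real tool:
# run the submitted replication package, then ask a journal-controlled LLM
# whether the numbers reported in the manuscript match what the code produced.

import subprocess
from pathlib import Path

def run_replication(package_dir: Path) -> str:
    """Execute the package's entry point (assumed convention: run_all.py)."""
    result = subprocess.run(
        ["python", "run_all.py"],
        cwd=package_dir,
        capture_output=True, text=True, timeout=3600,
    )
    if result.returncode != 0:
        raise RuntimeError(f"replication failed:\n{result.stderr}")
    return result.stdout

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to whatever model the journal itself hosts."""
    raise NotImplementedError("wire this up to a journal-controlled model")

def check_claims(manuscript_text: str, computed_output: str) -> str:
    """Ask the model to flag reported statistics the code didn't reproduce."""
    prompt = (
        "Below are a manuscript and the output of its replication package.\n"
        "List every reported statistic that does not match the computed output, "
        "or reply MATCH if all reported numbers are reproduced.\n\n"
        f"MANUSCRIPT:\n{manuscript_text}\n\nCOMPUTED OUTPUT:\n{computed_output}"
    )
    return query_llm(prompt)
```

The placeholder is the point: the model endpoint stays inside the journal's own infrastructure rather than going to yet another third-party service.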