Alarming that for-profit publishers are getting their journals labelled as diamond OA! This is one reason that the Free Journal Network is useful, so I hope you don't mind me "advertising" it here (https://freejournals.org/). It was created by academics to promote high-quality, legitimately diamond OA journals and share resources to support them, and we have a certification process that would likely exclude the publishers you refer to because one of our criteria is that the journal should be community/academic-controlled.
About the ethics of LLMs and scientific publishing, I know your "anyone talking about it is trying to sell you something" might have been hyperbole rather than something you meant literally, but I'd be interested in your take on some papers by academics on this, such as "The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool" (https://link.springer.com/article/10.1007/s43681-024-00493-8) and "Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing" (https://www.tandfonline.com/doi/full/10.1080/21507740.2023.2257181#d1e300)
Your Simard et al. link looks like it goes to an article by Hanson instead. Here's the right link, I think: https://direct.mit.edu/qss/article/doi/10.1162/qss_c_00331/124449/We-need-to-rethink-the-way-we-identify-diamond
Great!! Thanks for advertising that list, I hadn't been aware of it but that's super helpful (and I think the JQD should apply), and thanks for noticing the broken link.
And yes, you're right, it's hyperbole. I just personally find the whole thing so distasteful that I prefer not to think about it too much -- the solutions, such as they are, are going to come from systemic changes rather than hoping that people individually abide by these rules. I get that (some) individual people want to do the right thing -- so I think the actual best guidance is here:
https://www.nature.com/articles/s43588-023-00585-1
don't use proprietary LLMs for social science, at all. If you want to use LLMs, use ones that are actually open-source and replicable
Agree, the proprietary and black-box nature of the big tech companies' AI services is not good for science. Of course, very few researchers today know how to run an open-source LLM, whereas they all know how to go to chatgpt.com or copilot.microsoft.com. So realistically, there needs to be a big effort to provide an open-source LLM service for researchers that is easy to use. Hopefully someone is already doing that and I just don't know about it.