El Duvelle on Nostr:
For the first time, I reviewed a paper that I am 95% sure has been written with #GenAI (at least partly). I was both horrified and fascinated, and also had many questions:
Should manuscripts be automatically rejected if GenAI was used to write them, even if the contents make sense? (The main reason would be the breach of trust between authors and readers.)
How can we prove that a manuscript is AI-generated?
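(In practice, "prove" may be the wrong word: the signals available to a reviewer are statistical, not conclusive. As a sketch of what I mean, one common screening heuristic is to measure how predictable the text is to a small open language model; the model choice, and the idea that low perplexity is even suggestive, are my assumptions here, and a low score is weak evidence, never proof:

```python
# Minimal sketch: perplexity of a passage under GPT-2 (via Hugging Face
# transformers). Suspiciously low perplexity can hint at machine-generated
# text, but this is a screening heuristic, not proof of anything.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Tokenize, capping at the model's context window.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Using the inputs as labels gives the mean cross-entropy loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

print(perplexity("The hippocampus supports spatial memory in rodents."))
```

Any threshold for "suspicious" would have to be calibrated against human-written papers in the same field, which is exactly the hard part.)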
Should we keep a list of 'cues' that strongly suggest GenAI has been used to write a paper? And what if the companies got hold of that list and used it to fix their models?
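(If such a list existed, even a crude screen could be automated. A toy sketch; the cue phrases below are purely illustrative placeholders, not a vetted list, and, per the question above, publishing a real list might just help vendors patch the cues out:

```python
# Toy cue scan over a manuscript's text. The CUES entries are illustrative
# examples of alleged GenAI "tells", not an endorsed or validated list.
import re

CUES = [
    r"as an ai language model",
    r"\bdelve\b",
    r"it is important to note that",
    r"plays a crucial role in",
]

def flag_cues(text: str) -> list[str]:
    lowered = text.lower()
    # Return every cue pattern that matches anywhere in the text.
    return [cue for cue in CUES if re.search(cue, lowered)]

print(flag_cues("It is important to note that we delve into place cells."))
```

Of course, any single match means nothing; at best, a pile-up of cues tells a reviewer where to look more closely.)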
How can we inform scientists about this increasing risk? I'm pretty sure many of them would not even look for signs of AI-written text or images, and would put any problems down to good-faith errors rather than to the authors lacking fundamental knowledge of the topic they're writing about.
Lastly, even if one is not immediately opposed to the use of GenAI in scientific writing, the main problem is that these tools are not truth-oriented and produce negative-value publications, adding unsupported or false statements to the publication pool. Only an expert can check the contents, but if an expert were writing the paper, they wouldn't need GenAI to write it for them.
Looking forward to any answers or just discussions on any of these points!
#Publication #PeerReview #Science #Research #AI #ChatGPT #LLM