Letters to scientific journals surge as ‘prolific debutante’ authors likely use AI

New study reinforces worries about “mass production of junk” by unscrupulous scholars aiming to pad their CVs

Illustration of green lines of numbers going through a pile of paper. Credit: A. Mastin/Science

Just 2 days after Carlos Chaccour and Matthew Rudd published a paper in The New England Journal of Medicine about controlling malaria, an editor of the journal shared with them a letter he had received raising “robust objections.” The letter was well written, thought Chaccour, a physician-scientist at the University of Navarra, and Rudd, a statistician at the University of the South. But something unsettled them: The letter cited some of their previous work that did not support its claims. Because the researchers knew artificial intelligence (AI) can fabricate references, they suspected the letter was written by a machine.

That was the start of Chaccour and Rudd’s investigation into more than 730,000 letters published over the past 20 years. They found that from 2023 to 2025, a small group of “prolific debutante” authors suddenly appeared in the top 5% of letter writers. Much of the rise was likely driven by programs such as ChatGPT, the generative AI chatbot that debuted in 2022, Chaccour, Rudd, and their co-authors write in a preprint posted today on the Research Square server. “I was not surprised that it’s happening but was surprised by the magnitude and how blatant it is,” Chaccour says.

Other studies have documented a rise in the share of research articles that bear signs of AI-written text. But this study appears to be the first to examine the phenomenon among letters to the editor—a key venue for postpublication reviews, but also a potential avenue for exploitation by unscrupulous authors aiming to pad their CVs.

The nearly 8000 letter writers who jumped from the bottom to the top of the productivity distribution had an outsize influence on letters overall: They made up only 3% of all active authors from 2023 to 2025 but contributed 22% of the letters published, nearly 23,000 in all. Their letters appeared in 1930 journals, including 175 in The Lancet and 122 in The New England Journal of Medicine (NEJM). That growth has come at the expense of other authors, whose share of published letters has declined, in what Chaccour calls “a zero-sum ecosystem of editorial attention.”

Newcomers—authors who published 10 or more letters in the first year they published any letter—have been the fastest-growing group of authors. The last author of the letter responding to Chaccour and Rudd’s NEJM study—a letter that was ultimately not published—fell into that category: After publishing no letters in 2024, the author, a physician in Qatar, published more than 80 this year. Those letters appeared in journals covering 58 separate topics, an unlikely breadth of expertise for a single scholar.

That extreme productivity leap can’t be explained by broader trends in letter writing. The average annual number of letters written by all authors studied crept up only slightly during the past 20 years, from 1.16 to 1.34 per author, and the number of journals in which the letters appeared has been flat since 2022.

Chaccour and Rudd’s team concluded it would have been prohibitively difficult to run the full mass of letters they studied through an AI text detector. But as a case study, the team ran 81 letters published by the Qatari author through the Pangram AI text detector, which scores texts on a 100-point scale indicating the likelihood of AI use. The average score was 80. The team also ran the same test on 74 letters published in the late 2000s, before ChatGPT existed, by another, randomly chosen prolific letter writer. The average score was zero; not a single letter was flagged.

Letters may be an especially appealing venue for AI use because they are typically short and can appear timely and relevant without requiring substantial original input or data, Chaccour says. Journals typically do not subject letters to peer review. And as an investigation by Retraction Watch and Science reported last year, letter writers can use them as an easy way to inflate their publication counts, which may impress some colleagues who don’t take a closer look.

The findings come as journals continue to grapple with how to police AI-written submissions. Many require disclosure of AI use, but authors often underreport it. (Chaccour’s study did not analyze how often the letter writers disclosed AI use, though his case study found that the Qatari author declared it on 13 of his 81 letters.) Previous research has also indicated an increase in the number of prolific authors of research articles, possibly because of AI.

Many AI-written letters lack competent writing and substance, says Seth Leopold, a surgeon and researcher at the University of Washington who is editor-in-chief of Clinical Orthopaedics and Related Research. In a recent editorial, Leopold and a colleague lamented a rise in submissions of such letters. Through June of this year, the journal received 43, more than in any previous full year, and an AI-text detector flagged 21 of them. Besides exhibiting awkward syntax, these letters tended not to pose useful questions about the papers on which they commented; often, the limitations they identified in a study had already been pointed out by the paper itself, he says. “They all have the same paragraph structure, like a middle school essay.”

Some of the recently prolific letter writers identified in Chaccour and Rudd’s study are from developing countries where English is not the primary language, and Leopold acknowledges that AI programs can help nonnative English speakers with their writing. But he also worries about “the mass production of junk.” As a result, when potential AI-written text is flagged in a submission, he and his colleagues now ask the author to provide a verifiable quote from each cited source that substantiates the claim being made. It’s more work for the small staff of his journal, which is published by a scientific society. But “how can we not if we care about the quality of what we’re putting out there?” he says. “If we lose the confidence of the people who are reading these journals, you’ve really lost everything, and you aren’t going to get it back easily. And so this uncritical acceptance of these [AI] tools, to me, is a problem.”

AI-generated incorrect references and other errors can mislead readers and damage the reputation of authors who may have dedicated years to their research, Chaccour and his team conclude. “It took me 6 years and $25 million [in grant funding] to put out that [NEJM] paper,” Chaccour says. But it may have taken the Qatari author only minutes to draft the letter about it. “I can’t compete with that,” he says. “The legitimate discussion risks being drowned by the synthetic noise.”

