1 like 0 dislike
in General Moderation by Newbie (250 points)
In the past couple of months, many posts on social media platforms have claimed that some major news websites are publishing AI-generated articles without labeling them as AI-written. The articles being flagged often come from lesser-known news outlets and tend to be generic "breaking news" stories that lack an author byline or contain odd phrasing. Some of this talk has been amplified by users worried that AI might erode journalism standards.

While some of this may be true, it is mostly smaller sites, along with some news websites, that are using AI-generated content. For example, a study from Cornell shows that synthetic news articles increased drastically on smaller sites between 2022 and 2023, while most mainstream news sources still showed much lower usage of AI content. This suggests the claim is misleading: it conflates the real question of whether AI-written content should be disclosed with a broad assertion that major outlets are doing it in secret and at a scale for which evidence is lacking.

2 Answers

0 like 0 dislike
by Novice (610 points)

This headline is somewhat misleading. Data suggests that AI-generated content is much less common in major news articles than in opinion pieces and smaller news outlets. Research led by University of Maryland computer scientists found that more than 9% of U.S. newspapers contain some form of AI-created text. However, only 1.7% of articles in papers with a circulation of more than 100,000 were partially or fully AI-generated. The researchers also found that AI-generated content is more common on the opinion pages than on the news pages themselves, with this data drawn from articles in The New York Times, The Wall Street Journal, and The Washington Post.

Additionally, they found that AI content is far more common in smaller, more local news outlets. The researchers suggest this could stem from "collapsing news economies" in these smaller communities and from news deserts, where access to local, credible news coverage is limited. This creates trust issues for readers, since it becomes hard to know what is true, and it also matters because truth-telling and transparency are essential to journalism.

Yes, AI usage is apparent throughout the journalism industry. But the reality is that most AI-generated content comes from smaller news outlets or opinion articles, which show significantly higher percentages than hard news articles. I think the bigger concern is that 95 percent of these articles use AI without disclosing it.

https://today.umd.edu/report-ai-use-in-newspapers-is-widespread-uneven-and-rarely-disclosed

Exaggerated/Misleading
0 like 0 dislike
by Newbie (300 points)

After researching this claim, I found that it is misleading. There is evidence that AI-generated content has increased online, especially on smaller or lesser-known websites. However, there is not strong evidence showing that major, well-established news organizations are secretly publishing large amounts of AI-written articles without disclosure. The claim mixes real concerns about AI in journalism with assumptions that are not fully supported by available research. 

This study examined the rise of AI-generated news articles and found a noticeable increase between 2022 and 2023, particularly on smaller and lower-credibility sites. It did not conclude that major mainstream outlets were heavily relying on AI-generated articles without disclosure. https://arxiv.org/abs/2305.09820

The Associated Press has publicly discussed its policies around AI, explaining that AI tools may be used for specific tasks but that editorial standards and transparency are maintained. This suggests that major outlets like the AP are not secretly publishing AI-written articles without oversight. https://apnews.com/hub/artificial-intelligence

Researchers studying AI-generated content are interested in identifying risks to information quality, which may lead them to emphasize growth trends. Social media users spreading these claims may be influenced by fear of AI replacing journalists. At the same time, major news organizations have an incentive to protect their credibility, which explains why they are cautious about how AI is used and disclosed.

There is evidence that AI-generated articles exist online and that some websites publish generic news content without clear authorship or transparency. Smaller sites, in particular, appear to rely more heavily on automated or synthetic content.

Research shows that most AI-generated news content appears on smaller or lower-credibility sites, not major news outlets. There is limited evidence that mainstream organizations are secretly publishing large volumes of unlabeled AI-written articles. The claim exaggerates the scale and scope of AI use by major outlets.

This claim appears to be based on general social media discussion rather than a specific individual or organization, so there was no clear original source to contact.

Exaggerated/Misleading
