Some of Substack’s Biggest Newsletters Rely On AI Writing Tools

The most popular writers on Substack earn up to seven figures each year primarily by persuading readers to pay for their work. The newsletter platform’s subscription-driven business model offers creators different incentives than platforms like Facebook or YouTube, where traffic and engagement are king. In theory, that should help shield Substack from the wave of click-courting AI content that’s flooding the internet. But a new analysis shared exclusively with WIRED indicates that Substack hosts plenty of AI-generated writing, some of which is published in newsletters with hundreds of thousands of subscribers.

The AI-detection startup GPTZero scanned 25 to 30 recent posts published by each of the 100 most popular newsletters on Substack to see whether they contained AI-generated content. It estimated that 10 of the publications likely use AI in some capacity, while seven “significantly rely” on it in their written output. (GPTZero paid for subscriptions to Substack newsletters that are heavily paywalled.) Four of the newsletters that GPTZero identified as using AI extensively confirmed to WIRED that artificial intelligence tools are part of their writing process, while the remaining three did not respond to requests for comment.

Many of the newsletters GPTZero flagged as publishing AI-generated writing focus on sharing investment news and personal finance advice. While no AI-detection service is perfect—many, including GPTZero, can produce false positives—the analysis suggests that hundreds of thousands of people are now regularly consuming AI-generated or AI-assisted content that they are specifically subscribing to read. In some cases, they’re even paying for it.

“It’s hard not to be a little surprised,” says GPTZero cofounder and CTO Alex Cui about the results. “These are all prominent authors.” As a comparison, Cui cited another analysis that GPTZero ran on Wikipedia earlier this year, which estimated that around one in 20 articles on the site are likely AI-generated, about half the rate GPTZero found among Substack’s top newsletters.

Not everyone is shocked by how frequently generative artificial intelligence is used in certain pockets of the platform. “Makes total sense to me,” says Max Read, author of the internet and technology Substack newsletter Read Max, who views some financial news publications on Substack as “the slightly upmarket version of hustle-culture YouTubers.”

Helen Tobin, Substack’s head of communications, declined to comment directly on GPTZero’s findings. “We have several mechanisms in place to detect and mitigate inauthentic or coordinated spam activities, such as copypasta, duplicate content, SEO spam, phishing, and bot activity—many of which can involve AI-generated content,” Tobin told WIRED in an email. “However, we don’t proactively monitor or remove content solely based on its AI origins, as there are numerous valid, constructive applications for AI-assisted content creation.”

Substack does not have an official policy governing the use of AI. One of Substack’s cofounders, Hamish McKenzie, has described the generative AI boom as a sea change that writers will need to confront, regardless of their personal views on the tech: “Whether you’re for or against this development ultimately doesn’t matter. It’s happening,” he wrote in a Substack post last year.

Several of the Substack authors WIRED spoke to emphasized that they used AI to polish their prose rather than to generate entire posts out of whole cloth. David Skilling, a sports agency CEO who runs the popular soccer newsletter Original Football (over 630,000 subscribers), told WIRED he sees AI as a substitute editor. “I proudly use modern tools for productivity in my businesses,” says Skilling. “AI-detection tools may detect the use of AI, but there’s a huge difference between AI-generated and AI-assisted.”

Subham Panda, one of the writers of Spotlight by Xartup (over 668,000 subscribers), which covers news about startups around the world, said that his team uses AI as an “assistive medium to help us curate high-quality content faster.” He stressed that the newsletter primarily relies on AI to create images and to aggregate information and that writers are responsible for the “details and summary” contained in their posts.

Max Avery, a writer for the financial newsletter Strategic Wealth Briefing With Jake Claver (over 549,000 subscribers), says he uses AI writing software like Hemingway Editor Plus to polish his rough drafts. He says the tools help him “get more work done on the content-creation front.”

Financial entrepreneur Josh Belanger says he similarly uses ChatGPT to streamline the writing process for his newsletter, Belanger Trading (over 350,000 subscribers), and relies on the chatbot Claude to help him copyedit. “I will write out my thoughts, research, things that I want included, and I will plug it in,” he says. Belanger also creates custom GPTs (versions of ChatGPT tailored for specific tasks) to help polish more technical writing that includes specific jargon, which he says reduces the number of hallucinations the chatbot produces. “For publishing in finance or trading, there are a lot of nuances … AI’s not going to know, so I need to prompt it,” he says.

Compared to some of its competitors, Substack appears to have a relatively low amount of AI-generated writing. For example, two other AI-detection companies recently found that close to 40 percent of content on the blogging platform Medium was generated using artificial intelligence tools. But a large portion of the suspected AI-generated content on Medium had little engagement or readership, while the AI writing on Substack is being published by powerhouse accounts.

Substack is often portrayed as an alternative to the mainstream media, but the presence of AI-generated writing is something it shares with many traditional news websites. In some cases, at outlets including Sports Illustrated, CNET, and The A.V. Club, readers and other journalists have uncovered articles that appeared to be entirely crafted by AI. Generative AI has also been incorporated into news products in other ways; most recently, The Wall Street Journal announced it was testing AI-generated article summaries, and the Associated Press has used some form of AI for specific story types for a decade.

Some readers either don’t notice or aren’t bothered when writers they love embrace AI tools. GPTZero’s findings indicate that plenty of people are consuming and enjoying newsletters written with the help of AI, and other writers may soon try to replicate their success by adopting the technology as well.

But that doesn’t mean there won’t be pushback. GPTZero is launching a free “certified human” badge for bloggers to display, anticipating a future where guaranteeing that you don’t use AI becomes an important selling point. This type of disclaimer is already appearing in other creative industries. The credits of the new A24 horror movie Heretic, for example, included a disclosure: “No generative AI was used in the making of this film.”

Over the next few years, similar badges and seals asserting that creative works are 100 percent human may proliferate. They could make worried consumers feel like they’re making a more ethical choice, but they seem unlikely to slow the steady seep of AI into the media and film industries.
