![design AI tools](https://techstrong.ai/wp-content/uploads/2023/10/design.jpg)
Starting next month, Medium will no longer allow AI-generated content in its Partner Program, and using the fast-emerging technology could get writers tossed from the program altogether.
The online publishing platform sent users emails alerting them to the change, writing that Medium “recently defined and clarified our specific policies around the different uses of AI-generated content and policies, and what is allowed in the Medium Partner Program.”
“Medium is for human storytelling, not AI-generated writing,” the company wrote in the message, which was posted on X (formerly Twitter) by user Jonathan Gillham.
Medium becomes the latest organization wrestling with the issue of what to do with content that is created using generative AI tools like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. Beyond such issues as bias or hallucinations – the tendency of generative AI models to put inaccurate or false information into their outputs if they don’t know an answer – Medium and other companies that rely on content are trying to figure out how to let people know if the content was created by humans or large language models.
Disclosed or Not Disclosed
In this case, Medium appears to be coming down hard on users who rely on generative AI tools to create content, even if such content is disclosed. Starting May 1, such content won’t be allowed in the Partner Program, which gives writers an avenue for making money from their work. For example, member-only stories in the program earn money when a member reads them for 30 seconds or more, and earn more the longer the person keeps reading.
“Long, thoughtful reads are encouraged,” the company says on its program site.
Writers also can earn money when members clap, highlight or otherwise engage with their stories or when stories are “boosted,” which gives them an even greater chance of getting read.
This won’t happen for AI-created content, according to Medium, which has more than 63 million registered users.
“Accounts that have fully AI-generated writing behind the paywall may have those stories removed from the paywall and/or have their Partner Program enrollment revoked,” the company said.
An Evolving Approach to AI Content
The new rules for the Partner Program add to what Medium already had in its guidelines for writers. Last year, after surveying Medium writers, the organization said it welcomed AI-assisted writing as long as it was clearly labeled as such to ensure transparency, though Scott Lamb, editor of the Medium blog, stressed that it was just an “initial approach,” adding that “as this technology and its use continue to evolve, our policies may, too.”
Lamb also wrote that content the editors believe was created by AI without disclosure won’t be distributed on Medium’s network.
The guidelines call for writers to say within the first two paragraphs that AI was used to assist in writing the story, and photo captions should note whether an image was created with AI and is properly sourced. If such disclosures don’t happen, the story will only be distributed on the writer’s personal network.
Transparency a Challenge
Others also are figuring out how to handle the issue of transparency. The day before Medium sent out its email, Google said it was updating its Merchant Center to require sellers to identify text that was created using generative AI. Merchant Center is a tool aimed at helping companies reach shoppers on Google.
Last fall, high-profile video hosting service TikTok said it was enabling creators to label content that was created with AI, a move similar to what Meta-owned Instagram does. In addition, TikTok was evaluating ways to make such labeling automatic.
“AI enables incredible creative opportunities, but can potentially confuse or mislead viewers if they’re not aware content was generated or edited with AI,” TikTok executives wrote in a blog post at the time.
In February, Microsoft, Google, Meta, OpenAI, Apple and other companies joined the AI Safety Institute Consortium (AISIC), a public-private group of more than 200 organizations created by the federal government to help create standards and tools to mitigate the risks that come with the development of AI and machine learning technologies.
Housed within the U.S. National Institute of Standards and Technology (NIST), the consortium’s goal is to work on protocols and guidelines in a range of areas, including watermarking synthetic content so people can know if they’re looking at or listening to something created by AI, a proposal that has its share of supporters and detractors.