YouTube today announced how it will approach handling AI-created content on its platform with a series of new policies around responsible disclosure, as well as new tools for requesting the removal of deepfakes, among other things. The company says that although it already has policies prohibiting manipulated media, AI required the creation of new policies due to its potential to mislead viewers if they are unaware that the video has been “altered or synthetically created.”
One of the changes that will be implemented involves the creation of new disclosure requirements for YouTube creators. Now, they will have to reveal when they have created altered or synthetic content that appears realistic, including videos created with artificial intelligence tools. For example, this disclosure would be used if a creator uploads a video that appears to depict a real-world event that never happened, or shows someone saying something they never said or doing something they never did.
It’s worth noting that this disclosure is limited to content that “looks realistic” and is not a blanket disclosure requirement for all AI-made synthetic videos.
“We want viewers to have context when they watch realistic content, even when AI tools or other synthetic alterations have been used to generate it,” YouTube spokesperson Jack Malon told TechCrunch. “This is especially important when the content deals with sensitive topics, such as elections or ongoing conflicts,” he said.
In fact, AI-generated content is an area that YouTube itself is venturing into. The company announced in September that it was preparing to launch a new generative AI feature called Dream Screen early next year that would allow YouTube users to create an AI-generated video or background image by typing in what they wanted to watch. We’re told that all YouTube generative AI products and features will be automatically labeled as altered or synthetic.
The company also warns that creators who consistently fail to disclose their use of AI will be subject to “content removal, suspension from the YouTube Partner Program, or other sanctions.” YouTube says it will work with creators to make sure they understand the requirements before publishing. But it notes that some AI content, even if labeled, may be removed if it depicts “realistic violence” with the goal of shocking or disgusting viewers. This appears to be a timely consideration, given that deepfakes have already been used to confuse people about the war between Israel and Hamas.
However, YouTube’s warning about punitive action comes after a recent relaxation of its strike policy. In late August, the company announced that it would offer creators new ways to clear warnings before they become strikes that could result in channel removal. The change could allow creators to more deliberately skirt YouTube’s rules when deciding whether to post infringing content, since they can now complete an educational course to have a warning removed. Someone determined to post prohibited content now knows they can take that risk without losing their channel entirely.
If YouTube takes a similarly soft stance on AI, allowing creators to make “mistakes” and then go back to posting more videos, the resulting spread of misinformation could become a problem. The company is also unclear about how often its AI disclosure rules would have to be violated before it takes punitive action.
Other changes include the ability for any YouTube user to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual’s face or voice, also known as a deepfake. But the company clarifies that not all flagged content will be removed, leaving room for parody and satire. It also says it will consider whether the person requesting removal can be uniquely identified, and whether the video features a public official or other well-known individual, in which case “there may be a higher bar,” YouTube says.
In addition to the deepfake removal request tool, the company is introducing a new capability that will allow music partners to request the removal of AI-generated music that imitates an artist’s singing or rapping voice. YouTube has said it is developing a system that would eventually compensate artists and rights holders for AI music, so this appears to be an interim step that simply allows such content to be removed in the meantime. YouTube will make some exceptions here, too, noting that content featuring news reporting, analysis, or criticism of the synthetic vocals may be allowed to remain online. The removal system will initially be available only to labels and distributors that represent artists participating in YouTube’s AI experiments.
AI is being used in other areas of YouTube’s business as well, including augmenting the work of its 20,000 content reviewers around the world and identifying emerging forms of abuse and threats, the announcement notes. The company says it understands that bad actors will try to circumvent its rules, and that it will evolve its protections and policies based on user feedback.
“We are still at the beginning of our journey to unlock new forms of innovation and creativity on YouTube with generative AI. We are tremendously excited about the potential of this technology and know that what comes next will impact the creative industries for years to come,” reads the YouTube blog post, co-written by VPs of Product Management Jennifer Flannery O’Connor and Emily Moxley. “We are taking the time to balance these benefits with ensuring the continued safety of our community at this crucial time, and will work hand-in-hand with creators, artists and others across the creative industries to build a future that benefits us all.”