The half-million people who lived their dream of hearing Freddie Mercury cover System of a Down thanks to AI might soon be disappointed.
What happened: YouTube laid out its policy for AI-generated videos, which will require creators to disclose when videos have been created or altered using AI. It will also give people the ability to request the takedown of “deep fake” videos impersonating them.
- Many of the rules are geared towards preventing misinformation. Any synthetic or altered content must be labelled as such in the video description, but content around “sensitive topics” will feature a label right in the player.
- But copyright is also on YouTube’s mind, in an effort to keep the music industry partners it relies on happy. The rules for AI content imitating a musician are stricter, offering no exceptions for parody.
Why it matters: Misinformation, impersonation and copyright infringement are some of the biggest concerns observers have about generative AI. On the one hand, YouTube is being more proactive than other creative platforms about tackling these issues; on the other, it is wading into uncharted waters.
- YouTube hasn’t defined what constitutes a sensitive topic or what falls within the realm of parody — the latter of which is an area of copyright law that has yet to be applied to the generative AI era — but the company says it will provide more detailed guidance when the policies come into effect next year.
Zoom out: Last week, Meta made it mandatory to disclose if AI was used to create or alter political ads, while also blocking political advertisers from using its generative AI tools altogether. Google announced similar disclosure requirements for political ads in September.