YouTube now requires creators to disclose when AI-generated content is used in videos
Creators must disclose content that:
Makes a real person appear to say or do something they didn’t do
Alters footage of a real event or place
Generates a realistic-looking scene that didn’t actually occur
So, they want deepfakes to be clearly labeled, but if the entire video was scripted by ChatGPT, the AI label is not required?
"Generates a realistic-looking scene that didn't actually occur"
Doesn't this describe, like, every mainstream live action film or television show?
Technically, yes... but if it's in a movie or show, you already know it's fiction.
Bold of you to assume that everyone knows movies and shows aren't real.
Yeah, but this doesn't put any restrictions on anything; it just adds a label.
Also a lot of video game footage from livestreams, etc.
This is going to be devastating for all the prank YouTube channels.
Wouldn't this enable, for example, Trump claiming he didn't make the "bloodbath" comment, calling it a deepfake, and telling YouTube to remove all the news coverage of it? I mean, more generally, what stops someone from abusing this system?