– YouTube has announced new rules requiring creators to disclose when their videos contain altered or synthetic content, including generative AI
– Creators can flag such content during upload, with options to specify the nature of the alterations
– Labels will appear prominently on videos covering sensitive topics, and YouTube may automatically label content from creators who consistently fail to disclose AI-generated material
YouTube has announced new rules requiring creators to disclose when their videos feature altered or synthetic content, including generative AI. The decision comes amid concerns about the spread of misinformation on the platform. The feature has been added to Creator Studio, allowing creators to flag videos containing generated content at upload.
Creators will need to flag videos that could be mistaken for genuine footage, such as those depicting altered real events or realistic-looking events that never occurred. The labeling process is a simple yes/no option in the content settings, and YouTube provides examples to guide the decision. Labels will be prominently displayed on videos covering sensitive topics such as healthcare, elections, finance, and news.
Enforcement measures for failing to disclose AI-generated content have not been fully outlined, but YouTube says it may automatically add labels to videos from creators who consistently mislead viewers. The new labels will roll out in the coming weeks on the YouTube mobile app, with desktop and TV platforms to follow. Viewers should watch for these labels as a way to identify potentially misleading content in the ever-growing landscape of AI misinformation.