New Measures Aim to Address Misinformation by Marking AI-Generated Videos and Images
- Extended AI Labeling: TikTok will start labeling all AI-generated content uploaded to its platform, expanding its existing policy, which already marks AI-generated content created within the app.
- Content Credentials Technology: The initiative utilizes a digital watermark technology developed by the Coalition for Content Provenance and Authenticity, involving major tech players like Adobe and Microsoft.
- Broad Industry Support: This move aligns with actions by other tech giants such as YouTube and Meta, who are also adopting similar measures to ensure content authenticity.
As concerns grow about AI-generated content spreading misinformation, especially ahead of the upcoming U.S. elections, TikTok has announced a significant update to its content labeling policies. TikTok will soon begin labeling images and videos created with artificial intelligence, whether they are produced within the app or outside it. This initiative aims to give users greater transparency about the origins of the content they consume on the platform.
TikTok’s decision to implement comprehensive AI content labeling is part of a broader industry movement toward ensuring digital content authenticity. The technology underpinning the new labeling initiative, known as Content Credentials, was developed by the Coalition for Content Provenance and Authenticity, a group that includes prominent tech companies like Adobe and Microsoft and provides a framework other companies can adopt. The system works by attaching a digital watermark to AI-generated images and videos; the watermark carries metadata that can indicate how the content was made and whether it has been altered after creation.
For instance, when a user generates an image with OpenAI’s Dall-E tool, OpenAI embeds a watermark in the image along with additional data about how the image was created. If that image is then uploaded to TikTok, the platform will automatically label it as AI-generated based on the embedded watermark. This process helps viewers identify the nature of the content and supports the integrity of information shared online.
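To make the mechanism more concrete, the sketch below shows how an upload pipeline could inspect a Content Credentials manifest and decide whether to attach an AI-generated label. This is an illustrative Python sketch under stated assumptions, not TikTok’s or OpenAI’s actual code: the manifest layout is simplified from the C2PA specification, the `AI_SOURCE_TYPE` value and the `is_ai_generated` and `label_upload` helpers are hypothetical names, and a real pipeline would first extract and cryptographically verify the manifest using a C2PA SDK.

```python
import json

# IPTC "digital source type" URI commonly used in C2PA manifests to mark
# fully AI-generated media (assumption: the exact value may vary by producer).
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def is_ai_generated(manifest: dict) -> bool:
    """Return True if a parsed (and already verified) manifest declares
    that the asset was produced by a generative-AI tool."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False


def label_upload(manifest_json: str) -> str:
    """Hypothetical upload-time decision: which label, if any, to attach."""
    manifest = json.loads(manifest_json)
    return "AI-generated" if is_ai_generated(manifest) else "no label"


# A trimmed example manifest resembling what an image generator might embed.
example = json.dumps({
    "claim_generator": "ExampleImageGenerator/1.0",
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [{
            "action": "c2pa.created",
            "digitalSourceType": AI_SOURCE_TYPE,
        }]},
    }],
})

print(label_upload(example))  # -> AI-generated
```

A check like this only works when the generating tool actually embeds Content Credentials; if the metadata is missing or has been stripped, content would not be labeled by this path alone.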
Adam Presser, TikTok’s head of operations and trust and safety, emphasized that TikTok has stringent policies against unlabeled realistic AI-generated content. If such content is found on the platform, it will be removed for violating community guidelines. This policy reflects TikTok’s commitment to combating misinformation and maintaining a trustworthy environment for its users.
Furthermore, TikTok’s proactive stance on labeling AI-generated content is significant given its vast U.S. user base, where regulatory scrutiny over data privacy and misinformation is increasing. Legislation was recently signed into law requiring ByteDance, TikTok’s parent company, to divest TikTok over national security concerns, a law TikTok is currently challenging.
TikTok’s latest update to label AI-generated content is a pivotal step in the tech industry’s efforts to ensure transparency and trustworthiness in the digital space. By collaborating with other tech giants and adopting standardized technologies like Content Credentials, TikTok is setting a precedent in the fight against digital misinformation and fostering a safer online community.