Artificial intelligence was the buzzword of 2023, thanks to its potential to help humans in daily life and at work. AI-generated content keeps getting better, and what can now be produced in a single click is remarkable. AI is already being used in fields such as medicine, manufacturing, automotive, and education.
With the easy accessibility and mass adoption of AI tools in daily life, the world has seen unprecedented growth in deepfake images and videos that it was not ready for. Social media platforms, already struggling with misinformation, now also have to deal with this double-edged sword that tech giants have created.
Meta, perhaps late to the party, has now joined players like OpenAI, Microsoft, and Google in the race to make AI tools mainstream. To tackle the rapidly growing deepfake problem, Meta has started labeling images produced by its text-to-image generation tool as AI-generated.
In a recent blog post, Meta’s President of Global Affairs, Nick Clegg, announced that the company is taking the issue seriously and has already begun labeling all images generated by its text-to-image tool as "Imagined with AI."
Nick Clegg stated, “We want to be able to do this with content created with other companies’ tools too. That’s why we’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI.”
Meta is already working with industry partners to develop common standards for identifying AI-generated content through forums like the Partnership on AI (PAI). The invisible markers Meta uses for AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices. The company also wants other firms with generative AI tools, such as Google, OpenAI, and Midjourney, to adopt these standards, which would let internet users know which content is AI-generated.
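To give a sense of how such metadata markers work: the IPTC photo metadata standard defines a digital source type term for AI-generated media, "trainedAlgorithmicMedia", which compliant tools embed in an image's XMP metadata. The sketch below checks an XMP packet for that term. It is a simplified illustration, not Meta's actual detection pipeline: the `looks_ai_generated` helper and the sample XMP fragment are hypothetical, and real images would need a proper metadata parser rather than a substring check.

```python
# IPTC's digital source type vocabulary includes a controlled term for
# media created by a generative model ("trainedAlgorithmicMedia").
# Standards-following tools embed its URI in the image's XMP metadata.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(xmp_packet: str) -> bool:
    """Return True if the XMP packet declares the AI digital source type.

    Simplified sketch: a substring check on an already-extracted XMP
    string, not a full XMP/RDF parser.
    """
    return AI_SOURCE_TYPE in xmp_packet

# Hypothetical XMP fragment, as a generator following the IPTC
# standard might embed it:
sample_xmp = (
    '<rdf:Description '
    'xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/" '
    'Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
)

print(looks_ai_generated(sample_xmp))  # True
```

A marker like this is easy to strip, which is why it is typically paired with invisible watermarks embedded in the pixels themselves, as the article notes.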
Clegg also wrote, “If we determine that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context."
Meta also plans to add a feature to Facebook, Instagram, and Threads that gives people uploading audio and video content the option to disclose that the post was generated with AI. This matters because Meta's tools cannot identify AI-generated audio and video produced by tools that embed no built-in metadata.