Some AI-generated images posted to Facebook, Instagram, and Threads will in future be labeled as artificial. But only if they are made using tools from companies willing to work with Meta.
Meta, like other leading tech companies, has spent the past year promising to speed up deployment of generative artificial intelligence. Today it acknowledged it must also respond to the technology’s hazards, announcing an expanded policy of tagging AI-generated images posted to Facebook, Instagram, and Threads with warning labels to inform people of their artificial origins.
Malicious Loophole

Hany Farid, a professor at the UC Berkeley School of Information who has advised the C2PA initiative, says that anyone interested in using generative AI maliciously will likely turn to tools that don’t watermark their output or otherwise betray its nature. For example, the creators of the fake robocall using President Joe Biden’s voice, targeted at some New Hampshire voters last month, didn’t add any disclosure of its origins.