Don’t Count on Watermarking to Prevent Artificial Intelligence Deepfake Election Chaos
Tagging artificial intelligence-generated images with invisible markers may not be the airtight solution the federal government hopes will let people identify false images, avoid scams, and avert a feared wave of misinformation in the elections.
Watermarking, as the technology is known, has been held up by the White House and AI developers as an integral tool for combating false information in the 2024 elections, including fake images such as the ones the DeSantis presidential campaign disseminated depicting former President Donald Trump hugging Dr. Anthony Fauci.
Facebook parent company Meta announced this week that it would start labeling AI-generated images on its platforms, using built-in watermark detection tools to determine whether an image is synthetic. OpenAI has likewise added watermarks to its DALL-E image generator so that its images can be easily identified. The goal is to prevent “deepfake” images from deceiving the public. But these tools may have their limits, according to industry experts.
Watermarks “can be quite vulnerable and unreliable in practice, meaning that watermarking signals can be erased quite effectively from whatever AI-generated content you have in text and images,” Soheil Feizi, an associate professor of computer science at the University of Maryland, told the Washington Examiner.
Major AI developers such as Meta, OpenAI, and Adobe are working together to adopt common watermarking standards that will let them quickly identify whether an image is AI-generated. These standards, defined by the Coalition for Content Provenance and Authenticity, attach a “content credential” to an image that records its origin, editing history, and other details. The data are invisible to the human eye but detectable by software.
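To make the concept concrete, here is a minimal sketch in Python of how a provenance record can ride along inside an image file. It uses ordinary PNG text metadata as a stand-in rather than the actual C2PA credential format, and the field names are invented for illustration:

```python
# Sketch only: embeds an illustrative provenance record in PNG text metadata.
# Real content credentials use the C2PA specification, not this format.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical credential fields, chosen for illustration.
credential = {
    "generator": "example-model-v1",
    "created": "2024-02-08",
    "ai_generated": True,
}

img = Image.new("RGB", (64, 64), "gray")
meta = PngInfo()
meta.add_text("content_credential", json.dumps(credential))
img.save("tagged.png", pnginfo=meta)

# Software, unlike the human eye, can read the record back out.
reloaded = Image.open("tagged.png")
print(json.loads(reloaded.text["content_credential"]))
```

A record of this kind survives only as long as the file carrying it does: re-encoding or screenshotting the image discards the metadata, which is one reason experts consider such tagging fragile.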
Anne Neuberger, deputy national security adviser for cyber and emerging technology at the White House, said at an event last week that the Biden administration had convened a gathering on “building defenses to counter AI-driven voice cloning” and that it was exploring how to watermark content.
Some companies are also trying to embed data in photos at the moment a camera takes them to help establish that the images are not AI-generated.
But Feizi and other academics have found ways around such technologies. Feizi released a study in October in which his research team successfully stripped the vast majority of watermarks from AI-generated images through simple techniques.
Feizi said “adversarial actors” such as China or Iran could easily strip AI watermarking from images and videos created by AI. They could also “inject some signal into real images so that those watermarking detectors will detect those images as watermarked images,” he said.
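To see why stripping can be easy, consider a toy example, emphatically not the technique from Feizi’s study: a watermark hidden in each pixel’s least significant bit, which a dose of imperceptible noise wipes out entirely:

```python
# Toy illustration of a fragile watermark, not the attack from Feizi's paper.
# The mark lives in each pixel's least significant bit (LSB); adding noise of
# roughly two gray levels, invisible to a viewer, randomizes those bits.
import numpy as np

rng = np.random.default_rng(0)

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one watermark bit in each pixel's least significant bit."""
    return (image & 0xFE) | bits

def extract_lsb(image: np.ndarray) -> np.ndarray:
    return image & 0x01

image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

marked = embed_lsb(image, bits)
assert np.array_equal(extract_lsb(marked), bits)  # mark reads back perfectly

# "Attack": add faint Gaussian noise, then re-quantize to 8-bit pixels.
noisy = marked.astype(float) + rng.normal(0, 2, marked.shape)
attacked = np.clip(noisy, 0, 255).round().astype(np.uint8)

survival = (extract_lsb(attacked) == bits).mean()
print(f"watermark bits surviving: {survival:.0%}")  # ~50%, i.e., coin-flip
```

Production watermarks are far more robust than an LSB mark, but Feizi’s research found that stronger schemes also give way to more sophisticated perturb-and-regenerate attacks.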
Watermarks may also be lost as images, videos, or audio are transferred or copied, according to Vijay Balasubramaniyan, CEO of the voice verification service Pindrop. The startup was one of the first to identify the company behind a series of robocalls in which an AI-generated voice recording of President Joe Biden discouraged Democrats in New Hampshire from voting in the primaries.
The more an image or audio file is copied, the more diluted the initial watermarks may become, Balasubramaniyan told the Washington Examiner. “As audio gets added or music gets added to [the recording], as it gets rerecorded, as it gets transmitted through different channels, it loses a lot of the watermark,” he said.
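A rough sketch of that dilution effect, assuming a simple correlation-based watermark rather than anything Pindrop or the AI developers actually deploy: a faint high-frequency carrier mixed into the audio, which a narrowband telephone channel simply filters away:

```python
# Sketch only: a hypothetical watermark carried as a faint 18 kHz tone.
# Re-transmission through a ~3.4 kHz telephone channel strips it out.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44_100
t = np.arange(sr) / sr                            # 1 second of audio
speech = np.sin(2 * np.pi * 220 * t)              # stand-in for a voice
carrier = 0.01 * np.sin(2 * np.pi * 18_000 * t)   # faint watermark tone
marked = speech + carrier

def detect(signal: np.ndarray) -> float:
    """Correlate against the known carrier; ~1.0 means mark present."""
    return abs(np.dot(signal, carrier)) / np.dot(carrier, carrier)

# Simulate passing the recording through a narrowband phone channel.
sos = butter(8, 3_400, btype="low", fs=sr, output="sos")
transmitted = sosfilt(sos, marked)

print(f"before channel: {detect(marked):.2f}")       # ~1.0
print(f"after channel:  {detect(transmitted):.3f}")  # near 0
```

Each rerecording, mix, or format change acts like another lossy channel, so even a mark that survives one hop may vanish after several.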
There are not yet many alternatives to watermarking for identifying AI-generated images. Balasubramaniyan said his company’s software is a better bet than watermarking for detecting AI-generated voice audio.
Feizi also encouraged social media platforms to link images back to their sources so users can judge whether a source is a malicious actor.
Researchers may eventually find a way to add watermarks that cannot be stripped away by copying or editing, but as of January 2024, the technology is not ready.