We are still in deep trouble with deepfakes

Despite deepfakes being identified as a problem more than six years ago, methods to fight them have not kept up with the AI used to create them.

What happened: Sexually explicit, AI-created images of Taylor Swift flooded the internet last week. People distributed the images on Telegram, 4chan, and X, alongside tips for getting around the restrictions Microsoft had put on its Designer text-to-image tool to prevent exactly this kind of abuse.

Catch-up: Deepfakes (AI-generated media showing someone doing or saying something that never happened) are put to several malicious uses:

  • Harassment, by depicting people in explicit or illicit acts. Celebrities aren’t the only targets: deepfakes can be weaponized against anyone, for example as revenge porn.
     
  • Disinformation, including election interference. Robocalls featuring a deepfaked Joe Biden voice attempted to suppress turnout ahead of New Hampshire’s recent primary, while China was accused of using deepfakes to influence Taiwan’s elections.
     
  • Unauthorized works like a posthumous George Carlin special (and its promotional material) that drew a lawsuit from the late comedian’s estate and family.
     
  • Scams seemingly endorsed by celebrities or other public figures. A recent CTV News story about a defrauded couple was deepfaked to instead show the anchors describing an investment scam as a path to “financial independence.”

Why it matters: We still don’t have many ways to stop deepfakes from being created and distributed. Experts say laws haven’t kept up with technology, and determined users can find ways around a platform’s policies, leaving the likes of Microsoft scrambling to close loopholes.

  • Samsung’s newest phones add watermarks to images made with their AI features, but those marks are easily cropped out or erased with other AI tools on the same phone.
     
  • Metadata could help Instagram or YouTube automatically label AI-generated content, but that’s a work in progress. Metadata also doesn’t survive a screenshot, which captures only the pixels and carries none of the original file’s embedded information (see the sketch after this list).
     
  • Identification also has limits: labelling a deepfake helps users recognize disinformation, but it doesn’t prevent people from being victimized by deepfaked porn.
     
  • Meanwhile, tech companies continue to roll out more advanced AI creation tools, like Google’s Lumiere video maker and ByteDance’s voice converter.
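
To make that fragility concrete, here’s a minimal Python sketch using the Pillow imaging library (the file names and the `ai_generated` tag are hypothetical examples, not any platform’s actual labelling standard). It writes an AI-generation label into a PNG’s metadata, then rebuilds the image from its raw pixel data alone, roughly what a screenshot does, and shows the label disappears:

```python
# Minimal sketch: metadata labels don't survive a pixel-only copy.
# Uses Pillow; "ai_generated" is a hypothetical tag, not a real standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Create a stand-in image and label it as AI-generated in a PNG text chunk.
labeled = Image.new("RGB", (64, 64), "gray")
meta = PngInfo()
meta.add_text("ai_generated", "true")
labeled.save("labeled.png", pnginfo=meta)

print(Image.open("labeled.png").text)   # {'ai_generated': 'true'}

# 2. "Screenshot" it: rebuild the image from raw pixel data alone.
src = Image.open("labeled.png")
screenshot = Image.frombytes(src.mode, src.size, src.tobytes())
screenshot.save("screenshot.png")       # only pixels were carried over

print(Image.open("screenshot.png").text)  # {} -- the label is gone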

In Canada: British Columbia’s deepfake law took effect this week, but there is no equivalent at the federal level. Some believe deepfakes fall under existing laws against revenge porn or election disinformation (others disagree). One amendment the federal government has proposed to Bill C-27 would make visible and metadata watermarks one of the “best efforts” a company could take to limit the risks posed by its AI system.