China will restrict “deepfakes” (realistic-looking but fake photos or videos generated by artificial intelligence technology) under new regulations coming into effect tomorrow.
Why it matters: China’s regulations mark the first major effort in the world to tackle the growing problems associated with AI capable of generating realistic media.
- The regulations will require labelling of all AI-generated content (including images, audio, and text).
- AI-generated content that regulators deem harmful to the economy or national security (a definition broad enough to mean almost anything) will also be banned.
Why it’s happening: Deepfakes have the potential to sow confusion and spread misinformation by depicting people doing or saying things they haven’t actually done or said.
- Recent leaps in the ability of AI to generate authentic-seeming content have raised the urgency of addressing its potentially harmful uses.
- There’s a risk that deepfakes could be used to persuade many people of falsehoods, potentially influencing elections or amping up social conflict.
Yes, but: There’s no consensus on how best to respond to the problems created by deepfakes and other AI-generated content.
- China has taken the most sweeping and hard-line approach so far, while the EU has only issued voluntary guidelines for AI companies to follow.
- In Canada, planned legislation dealing with online harms could touch on the technology.
Zoom out: There are already cases of deepfakes being weaponized in attempts to shape real-world events. Last year, several European politicians were duped into holding video calls with a deepfake of Kyiv's mayor, who asked them to send Ukrainian refugees back for military service.