Companies like OpenAI, Google, and Microsoft aren’t the only ones in an AI race. Regulators around the world are starting to get in on the action, too.
Driving the news: US lawmakers, along with regulators in the EU and China, are starting to roll out proposals to regulate artificial intelligence companies and technology.
- In the US, Senate Majority Leader Chuck Schumer is spearheading legislation that would require companies to identify who worked on training AI models, reveal where their data came from, and explain how their systems come up with their responses.
- In the EU, proposed regulations for AI in “high-risk” use cases like law enforcement are closer to implementation than in the US, and some policymakers are pushing for more expansive rules that would also capture tools like ChatGPT.
- In China, draft guidelines on AI rules released by the country’s internet regulator will force AI companies to undergo a security review and make them responsible for the content their systems generate. China has already banned deepfakes made without the consent of their subject.
Why it matters: The rapid pace of AI development is pushing governments around the world to react quickly, and officials are increasingly expressing alarm that they have been caught flat-footed by tools like ChatGPT.
- “Something is coming. We aren't ready,” US Senator Chris Murphy tweeted recently.
Bottom line: Governments that want to regulate AI will need to overcome the grindingly slow pace at which lawmaking typically moves if they hope to keep up with the speed of technological change.