AI laws begin to take shape

Governments often play catch-up on regulating new technologies, but when it comes to AI, they are beginning to make up some ground.

What happened: The EU passed its Artificial Intelligence Act, making it only the second governing body to enact AI-specific legislation. China implemented rules for generative AI in August, but the EU’s rules are more far-reaching.

Why it matters: The EU is a trendsetter. Its GDPR rules shaped digital privacy and data laws in other jurisdictions, including Canada.

  • Many multinational companies also chose to comply with GDPR everywhere, rather than maintain different standards for different jurisdictions.

In Canada: The federal government has planned a list of amendments to Bill C-27, its own effort to regulate AI, with an eye to bringing generative systems into its scope.

  • Users must be told when they are chatting with an AI or when AI was used to create media, likely via a digital watermark that software could detect.
     
  • Operators would have to publicly post a generative AI’s capabilities, limitations, and potential harms. They would also have to assess and mitigate any harms their systems could cause, and report any harm they detect.
     
  • The government also plans clearer obligations tailored to each part of the supply chain. For example, machine learning developers would have to maintain a model card, while AI operators would need to shut down a system if they believe harm can’t be avoided.

Where it’s different: On top of reporting requirements, the EU has outright banned AI that identifies people by race, sexual orientation, religion, or political beliefs, as well as the scraping of images from the internet to train facial recognition systems.

Yes, but: The EU’s law doesn’t come into force until 2025, and Bill C-27 still hasn’t passed the House of Commons. That gives companies plenty of time to bring themselves in line, and reckless ones plenty of time to do damage.