The EU leads the charge to rein in AI

While some of us are still not entirely sure what a large language model is, EU lawmakers have gone ahead and passed the world’s first comprehensive AI regulation act.

Catch-up: The Artificial Intelligence Act — we love legislation with to-the-point titles — has been contentious, but it should be finalized in May, with implementation beginning next year. It stands to set the tone for global AI regulation and influence Big Tech developments.

  • Countries like France and Germany are still worried that it will stifle AI innovation, while other critics feel the act is too vague about matters like copyright protection.

How does it work? The law sorts AI systems into risk categories, ranging from no- to low-risk tech, which faces no restrictions (like AI-powered video games), to “unacceptable” tech, which includes AI used for social scoring, emotion recognition, or crime prediction based on profiling.

  • High-risk tech, which spans AI used in sensitive areas like law enforcement and critical infrastructure, will face close scrutiny: systems must be accurate, overseen by humans, and subject to risk assessments and usage logs.

  • Misbehaving companies will face big fines. For the ultimate sin of using or developing banned AI tools, they could pay as much as €35 million or 7% of global annual revenue, whichever is higher.

Why it matters: The EU has become a global influencer when it comes to tech regulation, and once again, it’s gotten legislation over the line before anyone else. Countries and blocs drafting their own AI rules will likely look to the EU for ideas.

In Canada: The federal government said it is working to amend Bill C-27 — a proposed suite of tech laws that includes comprehensive AI regulation — after watchdogs and Big Tech alike complained that it lacks risk-level distinctions and is already outdated.—QH