Nearly nine months after the release of ChatGPT, Canada is progressing toward increasing safety and transparency around generative AI.
Driving the news: The federal government is currently pulling together a voluntary code of conduct that could commit firms to safety measures, testing, and disclosures, per The Logic.
- Measures could include watermarking AI-generated content to distinguish it from human-created content and aggressively testing AI systems to find flaws.
Why it matters: Canada’s proposed Artificial Intelligence and Data Act could take years to finalize and pass into law, but the code would prepare companies for incoming AI legislation.
- Amazon, Google, Microsoft, Meta, and OpenAI have signed on to a similar program in the US as the government prepares to sign AI legislation into law later this year.
- The EU is also looking to pass a voluntary code of conduct as the bloc’s 27 countries work together to develop a universal set of laws—a process that could take years.
Yes, but: Regulators are playing catch-up with a sector that’s racing to improve and update AI models, meaning new rules could be outdated by the time they become enforceable.
Bottom line: With the plan in its consultation stage (and legislation a while away), it’s too soon to predict what AI regulation in Canada will look like and how effective it will be.—LA