Canadian tech agrees to AI code of conduct

While not quite as star-studded as the White House’s AI summit, a who’s who of Canadian tech companies has agreed to new rules concerning AI. 

What happened: A handful of Canada’s biggest tech companies, including BlackBerry, OpenText, and Cohere, agreed to sign on to new voluntary government guidelines for the development of AI technologies and a “robust, responsible AI ecosystem in Canada.”

  • The guidelines commit the companies to subjecting AI systems to risk assessments, protecting them from cyberattacks, and ensuring human oversight once they are deployed.
  • They also commit signatories to creating and using AI systems inclusively and sustainably while addressing the potential discriminatory impacts of AI algorithms.  

What’s next: The code of conduct is something of a stopgap until the government’s *real* AI regulation, the Artificial Intelligence and Data Act (AIDA), comes into effect in two years.

  • The regulation race is on around the globe. The EU is widely viewed as leading the way, with the world’s first comprehensive AI regulatory framework set to take effect in 2026. The US is also hard at work but, for now, only has a voluntary code in place.  

Yes, but: Despite new amendments to AIDA to better define which systems it covers and to introduce specific obligations for wide-ranging AI tools like chatbots, there are fears that by the time it comes into force, it will already be out of date. 

  • Why the concern? Look no further than the relentless flurry of AI news that dropped this week, from ChatGPT getting a voice to Meta’s plan to roll out “sassy” chatbots.  

Bottom line: AI could be a boon for Canada — adding $210 billion to the economy, if Google is to be believed — but it comes with big risks. Whatever the final version of AIDA looks like, AI companies will likely be left mostly to self-regulate in the meantime.—QH