Pumping the brakes on AI development

Kind of like us at an all-you-can-eat sushi restaurant, AI is doing too much way too fast. 

What happened: Over 1,100 notable signatories, including Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the latest tech from the sector’s leader, OpenAI.

  • The signatories want to press pause on AI development until a set of shared safety standards exists that can be audited and overseen by outside experts.

A short portion of the letter reads: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.” 

Why it matters: AI has advanced rapidly over a short period—too rapidly for some experts’ liking. They fear companies clamouring for the next big AI thing will ignore ethical considerations and guardrails along the way, causing potential risks to society at large.

Yes, but: Some experts have been critical of the letter. One AI researcher tweeted that the letter fuels AI hype and actually “makes it harder to tackle real, already occurring AI harms.” Meta’s chief AI scientist also said that he disagrees with the letter’s premise. 

In Canada: The feds are moving very slowly towards putting AI rules in place. Last year, they proposed a bill that would create Canada’s first rules for the responsible development of AI, appoint a national AI commissioner, and enact criminal penalties for those who misuse AI.—QH