America explores AI regulation

If anything can bring both sides of the aisle together, it's regulating artificial intelligence.

What happened: Sam Altman—who one could argue is the guy responsible for opening the AI floodgates by releasing ChatGPT to the public—appeared before Congress yesterday to call on US regulators to create licensing and safety standards for advanced AI systems. 

  • In recent weeks, researchers and executives have called for a pause on AI development, and AI pioneer Geoffrey Hinton quit his job at Google to speak about the risks of AI.

Catch up: Last week, EU lawmakers moved closer to passing the highly anticipated AI Act. It’s set to become the world’s first set of laws specifically governing the technology, complete with rules around facial recognition, biometric surveillance, and other AI applications.

  • In Canada, a combination of laws across human rights, tort, intellectual property, and privacy partially regulates the industry, but there are no regulations specifically for AI.

Why it matters: The success of ChatGPT has sparked an industry race that is giving billions of people access to increasingly powerful, yet imperfect and unregulated, AI, with companies like Microsoft and Google rushing to launch new generative AI chatbots.

  • Lawmakers are increasingly worried about how AI will transform upcoming elections (especially through fake ads), the job market, and security more broadly.
     
  • Altman told Congress: “…we can and must work together to identify and manage the potential downsides [of AI] so that we can all enjoy the tremendous upsides.”

Bottom line: The lawmaking process moves at a snail’s pace relative to developments in AI tech, and governments are scrambling to catch up. Altman’s call for them to hurry along underscores how leaders in the space (who have the vast resources needed to comply with new laws) may gain an edge over their competitors thanks to stricter regulation.—SB