The fallout at OpenAI escalated from “spicy corporate drama” to “arbiter of humanity’s fate” real quick.
Driving the news: Several OpenAI staffers allegedly sent a letter to the board of directors days before Sam Altman’s abrupt dismissal, warning about a new development with the potential to threaten human existence, per reporting by Reuters and The Information.
- Here’s where it gets complicated: Neither outlet has seen the document, and a separate source told The Verge no such letter exists or played a role in Altman’s exit.
The groundbreaking development? Reportedly, an AI model dubbed Q* (pronounced Q-Star) that can solve grade school-level math problems with a high level of accuracy.
Why it matters: Solving the same math problems as kids who are still eating glue doesn’t sound impressive, but it totally is. AI models are great at recognizing patterns and writing sentences, but they struggle to answer questions with one right answer, like math equations.
- When you ask ChatGPT to solve an equation, it isn’t computing the answer — it’s predicting the most plausible-looking one based on patterns in the data it’s trawled, which leads to a lot of wrong answers.
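To make the distinction concrete, here’s a deliberately crude sketch (not a description of how ChatGPT or any transformer actually works internally — the dictionary and fallback value are invented for illustration) contrasting a “model” that regurgitates memorized answers with one that actually computes:

```python
# Hypothetical "training data": answers the model has seen before.
seen_answers = {"2 + 2": "4", "10 * 3": "30"}

def pattern_matcher(question: str) -> str:
    # Looks up a memorized answer; on an unseen problem it falls back
    # to a plausible-looking guess instead of solving anything.
    return seen_answers.get(question, "4")

def calculator(question: str) -> str:
    # Actually parses and computes the answer.
    a, op, b = question.split()
    result = int(a) + int(b) if op == "+" else int(a) * int(b)
    return str(result)

print(pattern_matcher("7 + 5"))  # confidently wrong: "4"
print(calculator("7 + 5"))       # computed: "12"
```

The reported significance of Q* is, roughly, a step from the first behavior toward the second.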
Big picture: If OpenAI did develop a model with the “logic” for basic math, it could be the first step toward achieving artificial general intelligence (AGI) — defined by OpenAI as highly autonomous systems that outperform humans at most economically valuable work.
Yes, but: Some experts worry that AGI could lead to a Skynet scenario, due to either bad actors getting hold of the tech or the AI itself deciding that annihilating humanity serves its interests. This worry has grown now that the safety-first board keeping Altman in check is gone.—QH