Google’s new AI model has an image problem

Google’s newest AI model may be the company’s “most capable” yet, but it might need to retake some history classes.

What happened: Google has paused Gemini’s ability to generate images of people after the AI model produced inaccurate gender and racial depictions of historical figures, a flaw the company says was an unintended consequence of prioritizing diversity in the model’s training.

  • After launching just last week, the model churned out images that included a female pope, Asian Nazi soldiers, and a Black U.S. Founding Father.

Why it’s happening: Generative AI models are designed to spot patterns and make predictions. When a system like Gemini is also tuned to prioritize diversity in its outputs, that tuning can override the historical patterns in its training data, making it prone to absurd ‘hallucinations’ when depicting real historical figures.

Why it matters: As major AI players race to release tools and prove that their investments are paying off, glitches like this may be the trade-off of launching products quickly, before all the kinks are ironed out.

  • Leading AI models still ‘hallucinate’ at rates between 3% and 8%, while Google’s older PaLM 2 model (its first challenger to ChatGPT) had the highest hallucination rate at 27%.

What’s next: Google says it will re-release an improved version of the feature once the problems are fixed. But after the company declared this model its best AI system yet, the high-profile setback shows that even the most sophisticated models still have a long way to go.—LA