ChatGPT is a mean gossip

Ever get the feeling people are talking about you behind your back? It’s probably nothing… but a growing number of people have discovered that AI bots are talking about them—and they aren’t saying very nice things.

Driving the news: A local politician in Australia says ChatGPT falsely claimed that he was convicted of bribery, and he has taken the first steps toward filing a defamation lawsuit against the tool’s maker, OpenAI.

  • If the lawsuit moves ahead, it will be the first defamation suit OpenAI has faced over content produced by its technology.

It’s not the only case in which ChatGPT has made up false allegations about a public figure. The chatbot also falsely claimed that a law professor in California had sexually harassed a student, citing a nonexistent Washington Post article as evidence.

Why it matters: AI systems still struggle with factual accuracy, and as more people use them as search tools, their tendency to confidently assert plausible-sounding lies (including about other people) is becoming a more serious problem.

  • The same phenomenon could also pose risks as AI is applied in other fields, like medicine, where false results could lead to catastrophic outcomes.

Why it’s happening: Current versions of large language models (the technology that undergirds AI tools like ChatGPT) are prone to these types of “hallucinations,” and top researchers don’t agree on whether they’ll ever stop hallucinating.

  • OpenAI’s chief scientist Ilya Sutskever has said he believes hallucinations will eventually disappear as the technology improves.
     
  • On the other hand, Meta’s chief AI scientist Yann LeCun argues that the hallucination problem is inherent to large language models, which “have no experience of the real world.”

Zoom out: Beyond eroding people’s trust in AI tools, lying chatbots could also pose a legal risk to the companies that make them. Tech companies are generally shielded from liability for defamatory content shared on their platforms by third parties, but it’s an open question whether that protection extends to things their own chatbots say.