ChatGPT doesn’t want to write a poem about Donald Trump, and its makers don’t know why.
Driving the news: People are turning up examples of what they call political bias in ChatGPT’s answers to certain prompts, accusing the chatbot’s maker, OpenAI, of programming it with a partisan leaning.
- A viral tweet last week showed ChatGPT refusing to write a poem in praise of Donald Trump, even though it wrote one for his political rival, Joe Biden.
- When another user prompted it for a poem about a different right-wing politician, ChatGPT replied that it can't discuss controversial figures.
Why it’s happening: ChatGPT is programmed to refuse to answer certain prompts (it’s not supposed to say that doing drugs is cool, for example), but most of its responses are the product of text from across the internet that it was trained on.
- The training process itself, however, involved human workers who rated ChatGPT’s responses to different prompts, and that feedback may have inadvertently steered the system away from certain political topics.
- OpenAI CEO Sam Altman acknowledged last week that ChatGPT “has shortcomings around bias,” but OpenAI’s engineers don’t know exactly why they arise or how to fix them.
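- The distinction above — explicit refusal rules layered on top of learned behavior — can be illustrated with a minimal sketch. This is hypothetical pseudocode for the general pattern, not OpenAI's actual system; the topic list and function names are invented for illustration.

```python
# Hypothetical sketch of a chatbot that layers a hand-written refusal
# rule on top of a learned model's output. Not OpenAI's real code.
BLOCKED_TOPICS = {"drugs"}  # topics the operator has explicitly disallowed

def model_answer(prompt: str) -> str:
    # Stand-in for the trained model, whose behavior emerges from
    # internet training data and human feedback, not explicit rules.
    return f"Here is a response about: {prompt}"

def respond(prompt: str) -> str:
    """Return a canned refusal for blocked topics, else defer to the model."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return model_answer(prompt)
```

- The point of the sketch: refusals like the drug example are easy to trace to an explicit rule, while a trained model's avoidance of a topic (as with the Trump poem) has no single line of code anyone can point to.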
Why it matters: ChatGPT’s technological sophistication has prompted a frenzy of investment in AI over the past several weeks. Now, it’s becoming clear that not even the engineers who made ChatGPT fully understand how it works.