While it’s pretty amazing that we’ve gotten to the point where artificial intelligence can think critically and conjure solutions out of thin air, it can also be a pain in the behind.
What happened: The National Eating Disorder Association (NEDA), a US-based nonprofit, shut down Tessa, a chatbot that answered questions about eating disorders, after the company that programmed it added an unwanted AI component.
- Tessa was designed and tested as a closed system, only able to provide pre-approved info from authoritative sources and unable to veer from its script.
- Cass, the company running Tessa, added an AI component a year after the bot launched, which led to it going rogue and giving dieting tips—something NEDA did not want it doing.
Why it matters: Stories like this could become increasingly common. With generative AI seemingly the only tech attracting any interest these days, companies are feeling the pressure to find a way to use it—even if it’s unnecessary, or in this case, harmful.
- It’s no secret that AI still makes lots of mistakes—from fabricating court cases during legal research to a persistent inability to determine what a gorilla looks like.
Zoom out: Generally speaking, people don’t like having things foisted upon them—and that’s true of AI, too. A study from the IÉSEG School of Management found that workers who had positive views of AI lost enthusiasm for it when they were forced to use it.—QH