If your favourite AI image generator starts spitting out some weird stuff, we may have an explanation.
Driving the news: Researchers have developed a method to trick image-generating AI models like DALL-E into mischaracterizing images during training, according to their recently published paper.