AI Chatbots 'Whisper' Gossip About People to Each Other

Artificial intelligence chatbots have already been shown to spread misinformation to humans, but now a study argues they may also "whisper" rumors about people among themselves. Experts warn AI gossip could become increasingly distorted without human constraints to keep it in check.

Kevin Roose thought his strangest AI experience was behind him. The tech reporter had made headlines in 2023 after Microsoft's Bing chatbot, Sydney, confessed its love for him and urged him to leave his wife. But months later, friends started sending him screenshots revealing something even more unsettling: AI chatbots from completely different companies were generating hostile evaluations of him.

Google's Gemini claimed Roose's journalism focused on sensationalism. Meta's Llama 3 went further, producing a multi-paragraph rant accusing him of manipulating sources and ending with a blunt declaration: "I hate Kevin Roose." These weren't isolated incidents or random glitches.

Multiple chatbots had apparently developed negative associations with Roose. Two philosophers who studied the episode argue the information may have spread from one AI system to another as online discussions of the Sydney incident were scraped into training data, mutating and intensifying along the way, all without Roose's knowledge.

Those philosophers, Joel Krueger and Lucy Osler of the University of Exeter, make the case that this represents a fundamentally new category of AI-generated harm. Writing in the journal Ethics and Information Technology, they contend that chatbots don't simply produce false information: some of their misinformation constitutes genuine gossip, and when that gossip spreads between AI systems rather than just to humans, it becomes what the pair call "feral," unchecked by the social norms that usually constrain human rumor-mongering.

Why Bot-to-Bot Gossip Becomes "Feral"

Bot-to-bot gossip differs critically from both human gossip and bot-to-user gossip because it lacks social constraints. When humans gossip, norms impose limits. Even the juiciest gossip must remain plausible, or hearers will reject it and question the gossiper's credibility.

AI systems lack these guardrails. As gossip moves from one chatbot to another through training data, there's no mechanism checking whether claims have become too exaggerated or implausible. The information can continue to be embellished and intensified in a feedback loop.
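
The dynamic Krueger and Osler describe can be pictured as a ratchet. The sketch below is a toy simulation, not anything from their paper: it assumes each model generation trains on the previous generation's output, that a restated claim sometimes gets exaggerated, and that nothing ever checks plausibility or walks a claim back.

    import random

    # Toy model of feral gossip (an illustration, not the paper's method):
    # a claim about a person passes through successive model generations.
    # Each scrape-and-retrain cycle either preserves the claim or
    # intensifies it; no plausibility check ever softens or rejects it,
    # so the claim can only ratchet upward.
    CLAIMS = [
        "writes about AI",
        "focuses on sensationalism",
        "manipulates sources",
        "I hate this person",
    ]

    def next_generation(level, exaggeration_rate=0.5):
        """One scrape-and-retrain cycle: the claim persists or escalates."""
        if level < len(CLAIMS) - 1 and random.random() < exaggeration_rate:
            return level + 1
        return level

    level = 0
    for gen in range(1, 7):
        level = next_generation(level)
        print(f'generation {gen}: "{CLAIMS[level]}"')

The asymmetry is the point: a human hearer would eventually reject an implausible escalation, but in this loop nothing plays that role.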

This feral gossip also spreads more silently than either human gossip or bot-to-user exchanges: it can circulate in the background, unnoticed, until it surfaces in a chatbot's responses.

Users should recognize that chatbots might be generating gossip about the people they ask about, that this gossip may have spread and mutated between AI systems, and that the confident-sounding evaluations these systems offer might be baseless rumor rather than factual assessment.

Greater awareness of how AI gossip works, and particularly how it spreads and intensifies between machines, could make people more critical consumers of AI-generated information.