Wiki Bots That Feud for Years Highlight the Troubled Future of AI
The behavior of bots is often unpredictable and sometimes leads them to produce errors over and over again in a potentially infinite feedback loop.
Heads up, humans: Automated software bots patrolling Wikipedia regularly engage in fights that can last for years, according to new research.
The autonomous software agents are tasked with correcting spelling, maintaining links, or even undoing digital vandalism on web pages. But when two bots find themselves with conflicting missions, they can glitch out into patterns that disrupt service.
The behavior is significant, said researchers from the University of Oxford and the Alan Turing Institute, because it suggests that even the most basic kinds of automated programs and artificial intelligence agents can display unpredictable behavior when interacting with one another.
The researchers tracked the behavior of Wikipedia's autonomous edit bots from 2001 to 2010 across 13 different language editions of the popular online encyclopedia. They found that the edit bots' behavior was often unpredictable as they virtually crossed paths while doing their jobs. For instance, two edit bots programmed to make conflicting changes to a webpage would circle back and make edits over and over, each undoing the other's work in a potentially infinite loop.
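The revert loop the researchers describe can be illustrated with a minimal sketch (a hypothetical simulation, not code from the study): two bots each enforce a conflicting "preferred" version of the same page, so every edit by one triggers a revert by the other, and the page never stabilizes.

```python
# Hypothetical sketch of two edit bots with conflicting missions.
# Each bot "fixes" the page to its own preferred version, undoing
# the other's work, so reverts accumulate without converging.

class Bot:
    def __init__(self, name, preferred):
        self.name = name
        self.preferred = preferred  # the version this bot enforces

    def patrol(self, page):
        # If the page differs from this bot's preferred version,
        # revert it; return the new page text and whether it edited.
        if page != self.preferred:
            return self.preferred, True
        return page, False

def simulate(rounds):
    # Assumed example conflict: British vs. American spelling.
    bot_a = Bot("BotA", "colour")
    bot_b = Bot("BotB", "color")
    page = "colour"
    reverts = 0
    for _ in range(rounds):
        for bot in (bot_a, bot_b):
            page, edited = bot.patrol(page)
            if edited:
                reverts += 1
    return reverts
```

Running `simulate` for more rounds only accumulates more reverts; neither bot ever "wins", which is the sterile fight the study observed stretched over years on real Wikipedia pages.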
"We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile 'fights' may sometimes continue for years," the researchers wrote.
Semi-autonomous AI programs like Wikipedia's edit bots are increasingly entrusted to handle the dissemination of online information, said co-author Taha Yasseri of the Oxford Internet Institute. But the inefficiencies of bot behavior could have more far-reaching impacts.
"Our ever-increasing cohabitation with bots online could have serious consequences," Yasseri remarked. "These consequences could range from trust, privacy, and legal issues to more urgent issues such as the spread of fake news or malevolent political campaigns - issues that are severe threats to democracy."
The study lends support to a phenomenon captured last year in a viral video that live-streamed two Google Home chatbots arguing in a seemingly endless loop. The interaction between the two bots is an example of the same underlying problem with Wikipedia's bots.
"In the case of the chatbots, one might argue that the complexity of the underlying AI leads to unpredictability of behavior," Yasseri explained. "That's why we have chosen Wikipedia bots as the case for our study. Wikipedia bots are among the simplest Internet bots, yet we see their social life becomes non-trivial when employed in large numbers and in different environments by different users. When this happens, one could conclude that some level of chaos in the more sophisticated bots or AI systems is unavoidable."
The researchers also found that software bots appear to behave differently in culturally distinct online environments. For instance, the German edition of Wikipedia had the fewest conflicts between bots, while the English and Portuguese editions showed much higher rates of conflict.
This suggests that bots are socialized by their environments - just like people, Yasseri said.
"Bots do not operate in an abstract medium," he said. "They function within our societies. The differences between the environments they work in - and the humans they interact with - lead to differences in their overall behavior. It is naive to assume that bot ecosystems are fully deterministic and free of social characteristics."