Finding Common Ground With the Help of an Artificially Intelligent Debater
MIT's Collective Debate asks users to argue with an AI bot designed to moderate their views.
In these anxious, early years of the 21st century, artificial intelligence is being put to work on an eclectic range of problems: driving cars, modeling quantum physics, winning poker tournaments.
Researchers at the Massachusetts Institute of Technology are hoping AI can help untangle another complicated knot: America's polarized political landscape.
The Collective Debate project, developed at the MIT Media Lab, invites users to debate an artificial intelligence agent on a potentially divisive question. The idea is to encourage users to become more moderate by presenting opposing facts and figures on the specific issue. If you identify as liberal, the AI agent will argue a conservative position — and vice versa.
Whichever way you lean, left or right, the AI leans back. It's not just an argument generator, though. The Collective Debate AI actually “listens” to your contention, makes educated guesses as to your point of view, then generates counterarguments deemed the most likely to nudge you toward a more moderate position.
“I conceived Collective Debate as a system that would collect opinions on a controversial issue from anyone in the world and algorithmically organize them so that people could see what are the most common arguments for or against a position,” said Ann Yuan, a research assistant at MIT Media Lab and project designer.
All are invited to participate in the project, and you can jump right into the deep end of the rhetorical lake right now by clicking over to the Collective Debate project page. Yuan says that more than 8,000 people have filled out the initial questionnaire, and 4,500 have actually completed a full debate.
The system begins with a “moral matrix” questionnaire, which measures how a user identifies with five moral foundations: harm, fairness, purity, authority, and ingroup. The questionnaire helps establish where the participant lands on a three-dimensional data map of “liberal” or “conservative” values.
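To make the scoring step concrete, here is a minimal sketch of how a moral-matrix questionnaire could be tallied: each answer is a Likert-style rating tied to one of the five foundations, and the user's score on a foundation is the average of their ratings for it. The function name, data layout, and 0-5 scale are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch of moral-foundations scoring (not Collective Debate's code).
# Each questionnaire answer is a (foundation, rating) pair on a 0-5 scale.
from collections import defaultdict

FOUNDATIONS = ["harm", "fairness", "purity", "authority", "ingroup"]

def foundation_scores(answers):
    """Average the ratings given to each of the five moral foundations."""
    totals, counts = defaultdict(float), defaultdict(int)
    for foundation, rating in answers:
        totals[foundation] += rating
        counts[foundation] += 1
    return {f: totals[f] / counts[f] for f in FOUNDATIONS if counts[f]}

answers = [("harm", 5), ("harm", 4), ("fairness", 4),
           ("purity", 1), ("authority", 2), ("ingroup", 1)]
scores = foundation_scores(answers)  # e.g. scores["harm"] == 4.5
```

A profile like this one (high on harm and fairness, low on the other three) is the kind of signature that moral-foundations research associates with self-identified liberals, which is how the questionnaire can place a participant on the liberal-conservative map.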
Next, the user is asked to indicate whether he or she agrees with the statement: “In computer science, differences in professional outcomes between men and women are primarily the result of socialization and bias.”
Using an interactive pointer, users can indicate how strongly they agree or disagree, and how confident they are in their position.
From here, the user and the AI agent exchange arguments from pre-selected lists of statements. With each selection, the AI adjusts its counterargument to present facts and figures it deems most persuasive, based on the user’s previous responses. In other words, the AI “listens” to your point of view, then asks you to return the favor — all in the spirit of civil discourse.
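The selection step the paragraph describes amounts to scoring every candidate counterargument against the user's profile and serving the one predicted most persuasive. The sketch below shows that argmax structure with a toy stand-in predictor; the function names and the "appeals_to" heuristic are assumptions for illustration, not the system's real model.

```python
# Hypothetical illustration of counterargument selection: rank the
# pre-selected statements by predicted persuasiveness for this user.

def pick_counterargument(candidates, user_profile, predict_moderation):
    """Return the candidate most likely to nudge this user toward the middle."""
    return max(candidates, key=lambda arg: predict_moderation(arg, user_profile))

def toy_predictor(arg, profile):
    # Stand-in model: assume arguments appealing to the user's strongest
    # moral foundation are the most persuasive.
    return profile.get(arg["appeals_to"], 0.0)

profile = {"harm": 4.5, "fairness": 4.0, "authority": 1.5}
candidates = [
    {"text": "Consider the hiring-pipeline data...", "appeals_to": "fairness"},
    {"text": "Consider long-standing institutional norms...", "appeals_to": "authority"},
]
best = pick_counterargument(candidates, profile, toy_predictor)
# For this profile, the fairness-based argument wins out.
```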
Yuan chose to use the statement about bias in the computer sciences after reading about the so-called Google memo, which led to the firing of Google software engineer James Damore for criticizing the company's diversity policies.
“Damore argued that Google’s policies reflected a belief that the lack of female software engineers is attributable to socialization and bias alone, whereas Damore believes that differences in natural aptitude or interests also play a role,” Yuan told Seeker.
“I chose to base Collective Debate around this issue because it's political in an interesting way: In general, liberals disagree with Damore, while conservatives agree with him,” she said. “But if you look more closely the dividing line isn't so clear. People seem also to be divided according to their scientific leanings, moral outlooks, etc.”
Yuan also believed the issue was ripe for debate because many people had strong feelings about the controversy.
“Also, there were a lot of high-quality op-eds written on the issue from which I could mine arguments for and against Damore’s position,” she said.
After completing the AI debate — and presumably taking in all the relevant counterarguments — users are asked a second time to indicate their position on the initial question. Yuan's hypothesis was that participants would change their original assessment after debating the issue. Ideally, participants would move toward a more moderate position, since that's what the AI agent is expressly programmed to encourage.
“The system is artificially intelligent in that it attempts to optimize for a certain outcome based on data,” Yuan said. “Specifically, the system tries to get users to either change their minds or become more moderate, and tries to prevent users from becoming more extreme. It does so by observing past users and building predictive models of how users will behave.”
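The learning loop Yuan describes — observe past users, then predict how future users will respond — could be as simple as tracking, for each user profile type and argument, how often showing that argument preceded a moderation of views. The class below is a minimal sketch under that assumption; all names are hypothetical.

```python
# Hypothetical sketch of "building predictive models by observing past users":
# a frequency estimate of P(user moderates | profile type, argument shown).
from collections import defaultdict

class OutcomeModel:
    def __init__(self):
        self.shown = defaultdict(int)      # (profile_type, arg_id) -> times shown
        self.moderated = defaultdict(int)  # (profile_type, arg_id) -> times user moderated

    def record(self, profile_type, arg_id, did_moderate):
        key = (profile_type, arg_id)
        self.shown[key] += 1
        if did_moderate:
            self.moderated[key] += 1

    def p_moderate(self, profile_type, arg_id):
        key = (profile_type, arg_id)
        if not self.shown[key]:
            return 0.5  # no data yet for this pairing: assume a coin flip
        return self.moderated[key] / self.shown[key]

model = OutcomeModel()
model.record("liberal", "arg_7", True)
model.record("liberal", "arg_7", False)
model.record("liberal", "arg_7", True)
# model.p_moderate("liberal", "arg_7") is now 2/3
```

A real system would use something richer than raw frequencies, but even this estimate is enough to steer the argmax selection step toward arguments with a better track record.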
But people can be weird.
“The punchline is that after the debate people tended to move toward the middle and toward the extremes,” Yuan said. “About 15 percent of users either changed their minds completely or moved towards the middle. But about 12 percent of users moved towards the extremes — they started out only moderately agreeing with the claim but ended up strongly agreeing with it.”
Yuan says the 12 percent figure is likely a demonstration of the “backfire effect,” a concept from psychology describing how people dig in on their opinions when confronted with arguments that challenge them.
Still, the Collective Debate project succeeds as a proof-of-concept demonstration that AI can potentially help bridge our deeply divided discourse — especially online. Yuan hopes that, eventually, the technology could have practical applications in law and conflict resolution.
“My goal in building this project was not necessarily to try to change anyone’s mind on an issue, but rather to try to help people see value in the other side’s position,” Yuan said. “The hope is that we could use this understanding to build technologies that enable more productive political discourse by telling people exactly what they need to hear in order to see the other side.”