Humans — Not Bots — Are Largely Responsible for the Proliferation of Fake News
MIT researchers found that fake news was shared on Twitter faster and distributed farther than true stories — and humans are to blame.
After Hurricane Harvey slammed into Texas last August, a photo of a shark swimming up a flooded freeway in Houston went viral on Twitter. Shocking, yes. But real? No. Humans can be gullible, and Twitter gives us a platform to quickly fling our B.S. far and wide.
Until now, few studies have examined why this happens or evaluated how fake news spreads across Twitter compared to true stories. A team of MIT researchers reports on a large-scale investigation of about 126,000 news stories tweeted by 3 million people over 4.5 million times between 2006 and 2017. The researchers found that false stories propagated faster and farther than true stories and that people characterized fiction as more novel than the truth. The study is published today in the journal Science.
You may want to blame the virality of lies on the bots, those tiny software programs disguised as people. But when the researchers took bots out of the equation, they found that humans were still 70 percent more likely to retweet fabrications over facts. The results may compel us to hang our heads in shame. But this kind of study could arm Twitter and other social media sites with the tools to build in warnings that help save us from ourselves.
We mainly need to be rescued from political hogwash.
When Soroush Vosoughi, a postdoctoral researcher in MIT’s Media Lab, and his colleagues categorized the tweets, they found that politics sparked the largest rumor chains. (The other categories included urban legends, business, terrorism, science, entertainment, and natural disasters.) The team named those chains of tweets and retweets “cascades,” and cascades ranged from a single tweet to chains of retweets 19 hops deep.
Of the 126,000 news stories, those centered on politics saw about 45,000 cascades. For comparison, urban legends had 30,000. Political cascades based on false rumors — all verified by six independent fact-checking organizations, like Snopes and PolitiFact — reached more than 20,000 people and showed up in feeds three times faster than it took any of the other categories to propagate to a mere 10,000 people. The researchers saw spikes in false stories around politics during the 2012 and 2016 US presidential elections, as well.
Overall, fake news stories prompted cascades that spread farther, faster, deeper, and more broadly across Twitter than those rooted in truth. Truthful cascades never branched deeper than about 10 retweets, whereas falsehoods often exceeded that depth, reaching depths as great as 19 faster than any truth-based cascade ever spread.
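The depth and size measurements behind these comparisons can be sketched with a toy retweet tree. The record format, field names, and tweet IDs below are hypothetical stand-ins, and the real analysis worked on millions of tweets:

```python
from collections import defaultdict

# Hypothetical retweet records: (tweet_id, parent_id), where parent_id is
# None for a cascade's original tweet. This schema is illustrative, not
# the study's actual data format.
retweets = [
    ("t1", None), ("t2", "t1"), ("t3", "t2"), ("t4", "t1"), ("t5", "t3"),
]

children = defaultdict(list)
roots = []
for tweet, parent in retweets:
    if parent is None:
        roots.append(tweet)
    else:
        children[parent].append(tweet)

def depth(node):
    """Longest retweet chain below this tweet (a lone tweet has depth 1)."""
    if not children[node]:
        return 1
    return 1 + max(depth(c) for c in children[node])

def size(node):
    """Total number of tweets in the cascade rooted at this tweet."""
    return 1 + sum(size(c) for c in children[node])

for root in roots:
    print(root, "depth:", depth(root), "size:", size(root))
```

Here the chain t1 → t2 → t3 → t5 gives the cascade a depth of 4 and a size of 5; the study tracked the same kinds of quantities, plus breadth and the speed at which each depth was reached.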
Not surprisingly, the number of people touched by false news far exceeded the number who read true stories. In fact, truthful cascades rarely made it out to more than 1,000 users. On the other hand, the top 1 percent of cascading fabrications regularly spread to between 1,000 and 100,000 people.
To see whether bots were at fault for the promulgation of nonsense, Vosoughi and his colleagues used a bot-detection algorithm to identify and remove all bots before they ran duplicate analyses. The results were the same.
“I had a hunch that there was a difference between how false and truthful news stories spread across Twitter. But I didn’t expect that there would be a clear difference and that false news would outperform true news across all different metrics we looked at,” Vosoughi told Seeker.
The researchers guessed that fake news might be more novel and therefore more enticing. They randomly selected about 5,000 users who had disseminated both true and false rumors and analyzed about 25,000 tweets that those users had seen no more than 60 days before they decided to retweet a story.
A computer program compared the language in the retweeted stories to the language in the tweets each user had seen within the previous 60 days and produced a score measuring how novel the retweeted story was relative to that other content. Tweets of false stories came way out on top, indicating they were more novel than the truth.
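The novelty comparison can be illustrated with a much simpler stand-in than the study's actual method: score a candidate story by how dissimilar its words are from everything the user recently saw, using bag-of-words cosine similarity. This is a hedged sketch, not the researchers' program:

```python
import math
from collections import Counter

def vec(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def novelty(candidate, recent_tweets):
    """1 minus the highest similarity to any recently seen tweet:
    higher scores mean the story looks more unlike prior content."""
    sims = [cosine(vec(candidate), vec(t)) for t in recent_tweets]
    return 1.0 - max(sims, default=0.0)
```

For example, `novelty("shark swims up flooded freeway", recent)` would score high if nothing in `recent` mentions similar words, and near zero if the user had already seen the same story.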
Next, the researchers drilled into the replies to tweets, analyzing the emotional content based on some 32,000 hashtags and phrases and their associations with anger, fear, anticipation, trust, surprise, sadness, joy, and disgust. People who replied to false rumors more often expressed surprise and disgust. People who replied to rumors that were true expressed sadness, anticipation, joy, and trust.
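Tallying emotional associations in replies works roughly like a lexicon lookup: each reply's words are matched against a dictionary mapping terms to emotions, and the matches are counted. The tiny lexicon below is made up for illustration; the study drew on some 32,000 real hashtags and phrases:

```python
from collections import Counter

# Illustrative word-to-emotion lexicon; these entries are invented for
# the example and are far smaller than the study's real lexicon.
LEXICON = {
    "unbelievable": "surprise",
    "gross": "disgust",
    "hopeful": "anticipation",
    "sad": "sadness",
}

def emotion_profile(replies):
    """Count emotion-bearing words across a list of reply texts."""
    counts = Counter()
    for reply in replies:
        for word in reply.lower().split():
            if word in LEXICON:
                counts[LEXICON[word]] += 1
    return counts
```

Aggregating such profiles over replies to false versus true rumors is what let the team contrast the surprise-and-disgust signature of falsehoods with the sadness, anticipation, joy, and trust attached to truths.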
In short, fake news that’s surprising or disgusting is seen as more novel than true stories, which could explain why it gets retweeted more often.
“This study shows what’s going on. The next logical step is to ask, ‘What can we do about it?’” said Vosoughi.
That will involve studying user responses to flags or alerts applied to information. Vosoughi remains skeptical, though, and said that he thinks these interventions will influence only a small number of people.