Fake news is on the rise in many countries, distorting public debate about pressing issues and potentially interfering in elections. Despite these dangers, attempts to combat fake news can be ineffective for a number of reasons. In this video, learn more about fake news, common pitfalls in fact-checking, and how to more effectively respond to fake news with Professor D.J. Flynn.
D.J. Flynn is Assistant Professor of Political Science in the School of Global and Public Affairs at IE University. He is also affiliated with IE’s Center for the Governance of Change. His research focuses on misinformation, public opinion, and survey and experimental methodology. He currently teaches courses on public opinion, political communication, and quantitative methods at IE University.
So fake news is a problem that lots of countries around the world are talking about and trying to deal with through regulation, oversight, and lots of other approaches. But before we can have an intelligent conversation about what we're going to do in response to fake news, it's really important that we're all on the same page about exactly what fake news is.
We typically define fake news as stories that are knowingly fabricated and passed off as legitimate news.
Now, stories like these are of growing concern in countries across the world right now, and these countries are thinking about how to deal with this problem.
But before I talk about solutions, I want to talk about three common challenges that journalists, governments, and other stakeholders have to grapple with when they're addressing fake news and misinformation.
So the first challenge is a general human tendency called motivated reasoning. Motivated reasoning refers to people's tendency to heavily scrutinize any information that goes against what they're motivated to believe.
So if you're presented with factual information that is inconsistent with what you already believe, and with something that you're strongly motivated to believe, you heavily scrutinize that information in a way that you wouldn't if it supported your existing attitudes and beliefs. The result of this, in circumstances where people are really strongly motivated to believe things that aren't true, is a problem called the backfire effect.
So the backfire effect occurs when you present someone with factual information that's meant to push them in a particular direction, but they engage in this process of motivated reasoning to such an extreme degree that the information actually pushes them in the wrong direction. The canonical example comes from a series of experiments published in 2010, in which subjects were presented with factual information about the United States' failure to find weapons of mass destruction in Iraq after the invasion.
The goal of that information was to convince people that weapons of mass destruction were in fact never found. But among a small subset of highly conservative, highly Republican respondents, who were strongly motivated to believe that weapons of mass destruction had been found, the pattern was really interesting.
Among this subset of respondents, they counter-argued the information so much, and came up with so many alternate justifications for the invasion of Iraq, that they actually got pushed in the wrong direction: at the conclusion of the experiment they were even more confident that weapons had been found.
So here's a case where we tried to push people in one direction, but because they engaged in motivated reasoning, they got pushed in the other direction. This is the backfire effect.
A second problem that we often come across has to do with how we phrase corrective information when we’re giving it to people.
So if we want to correct something that’s not true, there are a couple of different semantic approaches we can take to doing that.
We can phrase corrective information in a positive way.
So we can say “vaccines are safe.”
So we can frame things positively, or we can use a more negative frame, something that's called a negation, and say "vaccines are not dangerous," "vaccines do not cause autism," or "vaccines do not cause mental illness," for example.
Now, this might seem like a finicky distinction, but it turns out to be really important, because psychologically we're much better at processing affirmations than negations. When we phrase corrective information as negations, people tend to forget the truth value of the statement as time goes on, and they tend to conflate the two concepts that were contained in the negation. Let me make this a little more concrete.
So in the context of the vaccine example, if I tell someone today that "vaccines do not cause autism," what's likely to happen is that, unless they're strongly motivated to come up with some alternate reason for not believing it, they're going to accept the information initially and change their beliefs about the safety of vaccines.
But a week, two weeks, a month, a few months down the road, their memory for the truth value of that statement fades, and the only thing that's left in their head is this connection between vaccines and autism. So in attempting to correct the misperception, we've actually reinforced it, and we've linked these two things in people's minds.
The third and final challenge I'll talk about has to do with people's level of cynicism about factual claims in politics more generally.
So if you're living in a country or a subnational unit where there's a lot of misinformation and a lot of fact-checking going on, what can often happen is that people become deeply skeptical about any factual claim in politics.
Now, while it might sound good to be skeptical and to think critically about claims, this can have a negative effect: people become less and less confident in everything that they know, whether it's true or false.
And so if we're constantly correcting misinformation, constantly telling people that things are not true, what can end up happening is something like a spillover problem: people know things that are true, but we've depressed their confidence and made them so cynical that they become less and less confident even in the things they know that are actually true.
So these are three of the most common challenges that people face when trying to correct misinformation, though certainly not an exhaustive list.
Now, the good news is there are lots of institutions that are working on trying to identify effective ways to correct misinformation online and elsewhere.
Probably the most notable development in recent years has been this huge upsurge in professional fact-checking organizations that now exist in dozens and dozens of countries across the world.
And these are independent, highly professionalized organizations that follow a code of conduct and are very transparent about their methods. Their entire job is to adjudicate factual disputes: claims by political parties and candidates, things circulating online, really anything. So that's a really positive development in the last couple of years.
Now here at IE a lot of my research looks at how effective fact-checking and other forms of correcting misinformation are.
So I do things like survey and field experiments and traditional surveys, where I present people with different formats of information and different types of corrective messages, and I use experiments and other tools to understand which approaches are most effective for combating misinformation.
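To make the logic of these survey experiments concrete, here is a minimal sketch, using entirely simulated data, of how one might estimate the effect of an affirmation-style correction versus a negation-style correction on belief accuracy. The group names, score scale, and numbers are hypothetical illustrations, not taken from any actual study:

```python
import random
import statistics

random.seed(42)

# Simulated belief-accuracy scores (0-10 scale) for two randomly assigned
# groups: one shown an affirmation-style correction ("vaccines are safe"),
# one shown a negation-style correction ("vaccines do not cause autism").
# The distributions below are invented for illustration only.
affirmation = [random.gauss(7.0, 1.5) for _ in range(200)]
negation = [random.gauss(6.2, 1.5) for _ in range(200)]

def mean_diff(treatment, control):
    """Estimated average treatment effect: difference in group means."""
    return statistics.mean(treatment) - statistics.mean(control)

effect = mean_diff(affirmation, negation)
print(f"Estimated effect of affirmation vs. negation framing: {effect:.2f}")
```

Because assignment to the two message formats is random, the difference in group means is an unbiased estimate of the framing effect; a real analysis would also report a standard error or confidence interval.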
Now, misinformation and fake news are not going away. As technology proliferates and evolves, and we gain more and more ways of spreading information quickly and widely, this is a problem that's going to be with us for a long time. Hopefully my research here at IE contributes to helping us understand how we can address it.