Venue: ENS, Salle Ribot (29 rue d'Ulm) and on Zoom (contact Sacha Altay for the link)
Americans are more worried about misinformation than about sexism, racism, terrorism, or climate change. Yet fears over misinformation on social media are overblown. Misinformation represents a minute proportion of the news that people consume online (around 1%), and a small minority of people account for most of the misinformation consumed and shared online. People, on average, are good at detecting fake news and identifying reliable sources of information. They do not believe everything they see and read on the internet. Instead, they are active consumers of information who domesticate technologies in unexpected ways. It is very unlikely that social media exacerbates the misinformation problem, that fake news contributes to important political events, or that falsehoods spread faster than the truth. Yet some fake news stories do go viral, and understanding why they go viral despite their inaccuracy is important.
In a series of experiments, we identified a factor that, alongside accuracy, drives the sharing of true and fake news: the 'interestingness-if-true' of a piece of news. For example, if alcohol were a cure for COVID-19, the pandemic would end in an unprecedented international booze-up. In three experiments (N = 904), participants were more willing to share news they found more interesting-if-true, as well as news they deemed more accurate. They rated fake news less accurate but more interesting-if-true than true news. People may not share news of questionable accuracy by mistake, but because the news has qualities that compensate for its potential inaccuracy, such as being interesting-if-true.
Despite these qualities, why are most people reluctant to share fake news? To benefit from communication, receivers should place less trust in people who share fake news, and the costs of sharing fake news should outweigh the reputational benefits of sharing true news. Otherwise, we would end up trusting people who mislead us half of the time. Four experiments (N = 3,656) support this hypothesis: sharing fake news hurts one's reputation in a way that is difficult to fix, even when the fake news is politically congruent. Most participants asked to be paid to share fake news (even when it was politically congruent), and asked for more when their reputation was at stake.
During the second part of my PhD, I tested solutions to inform people efficiently. I found that discussing the scientific evidence on Genetically Modified (GM) food safety and the usefulness of vaccines in small groups changed people's minds in the direction of the scientific consensus.
To scale up the power of discussion, we created a chatbot that emulated the most important traits of discussion. We found that rebutting the most common counterarguments against GMOs with a chatbot led to more positive attitudes towards GMOs than either a non-persuasive control text or a paragraph highlighting the scientific consensus. However, the dialogical structure of the chatbot seemed to matter more than its interactivity.
During the pandemic, we deployed a chatbot to inform the French population about COVID-19 vaccines. Interacting with this chatbot for a few minutes, as it answered the most common questions about COVID-19 vaccines, increased people's intention to get vaccinated and had a positive impact on their attitudes towards the vaccines.
In the end, people are not stupid. When provided with good arguments, they change their minds in the direction of those arguments. Most people avoid sharing misinformation because they care about their reputation. We do not live in a post-truth society in which people disregard the truth. Overall, we should probably be more concerned about the large portion of people who do not trust reliable sources and are uninformed because they do not follow the news, than about the minority of people who trust unreliable sources and are misinformed.