new media & society
1–22 © The Author(s) 2020
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1461444820969893
journals.sagepub.com/home/nms

Why do so few people share fake news? It hurts their reputation

Sacha Altay, Anne-Sophie Hacquin and Hugo Mercier
Institut Jean Nicod, Département d'études cognitives, ENS, EHESS, PSL University, CNRS, France

Abstract
In spite of the attractiveness of fake news stories, most people are reluctant to share them. Why? Four pre-registered experiments (N = 3,656) suggest that sharing fake news hurts one's reputation in a way that is difficult to fix, even for politically congruent fake news. The decrease in trust a source (media outlet or individual) suffers when sharing one fake news story against a background of real news is larger than the increase in trust a source enjoys when sharing one real news story against a background of fake news. A comparison with real-world media outlets showed that only sources sharing no fake news at all had trust ratings similar to those of mainstream media. Finally, we found that the majority of people declare they would have to be paid to share fake news, even when the news is politically congruent, and more so when their reputation is at stake.

Keywords
Communication, fake news, misinformation, political bias, reputation, social media, source, trust

Corresponding author:
Hugo Mercier, Département d'études cognitives, ENS, EHESS, PSL University, Institut Jean Nicod, CNRS, 29 rue d'Ulm, Paris 75005, France.
Email: hugo.mercier@gmail.com

Recent research suggests that we live in a "post-truth" era (Lewandowsky et al., 2017; Peters, 2018), when ideology trumps facts (Van Bavel and Pereira, 2018), social media are infected by fake news (Del Vicario et al., 2016), and lies spread faster than (some) truths (Vosoughi et al., 2018). We might even come to believe in fake news—understood as "fabricated information that mimics news media content in form but not in organizational process or intent" (Lazer et al., 2018, p. 1094; see also Tandoc et al., 2018a)—for reasons as superficial as having been repeatedly exposed to them (Balmas, 2014). In fact, despite the popularity of the "post-truth" narrative (Lewandowsky et al., 2017; Peters, 2018), an interesting paradox emerges from the scientific literature on fake news: in spite of its cognitive salience and attractiveness (Acerbi, 2019), fake news is shared by only a small minority of Internet users (Grinberg et al., 2019; Guess et al., 2019; Nelson and Taneja, 2018; Osmundsen et al., 2020). In the present article, we suggest and test an explanation for this paradox: sharing fake news hurts the epistemic reputation of its source and reduces the attention the source will receive in the future, even when the fake news supports the audience's political stance. Fake news created with the intention of generating engagement is not constrained by reality. This freedom allows fake news to tap into the natural biases of the human mind, such as our tendency to pay attention to information related to threats, sex, disgust, or socially salient individuals (Acerbi, 2019; Blaine and Boyer, 2018; Vosoughi et al., 2018). For example, in 2017, the most shared fake news story on Facebook was entitled "Babysitter transported to hospital after inserting a baby in her vagina" (BuzzFeed, 2017). In 2018, it was "Lottery winner arrested for dumping $200,000 of manure on ex-boss' lawn" (BuzzFeed, 2018). Despite the cognitive appeal of fake news, ordinary citizens, who overwhelmingly value accuracy (e.g. Knight Foundation, 2018; The Media Insight Project, 2016), and who believe fake news represents a serious threat (Mitchell et al., 2019), are "becoming more epistemically responsible consumers of digital information" (Chambers, 2020: 1).
In Europe, less than 4% of the news circulating on Twitter in April 2019 was fake (Marchal et al., 2019), and fake news represents only 0.15% of Americans' daily media diet (Allen et al., 2020). During the 2016 presidential election in the United States, 0.1% of Twitter users were responsible for 80% of the fake news shared (Grinberg et al., 2019). On Facebook, the pattern is similar: only 10% of users shared any fake news during the 2016 US presidential election (Guess et al., 2019). If few people share fake news, media outlets sharing fake news are also relatively rare and highly specialized. Mainstream media only rarely share fake news (at least intentionally, for example, Quandt et al., 2020; see also the notion of press accountability: Painter and Hodges, 2010), while sharing fake news is common for some hyper-partisan and specialized outlets (Guo and Vargo, 2018; Pennycook and Rand, 2019a). We hypothesize that one reason why the majority of people and media sources avoid sharing fake news, in spite of its attractiveness, is that they want to maintain a good epistemic reputation, in order to enjoy the social benefits associated with being seen as a good source of information (see, for example, Altay et al., 2020; Altay and Mercier, 2020). For example, evidence suggests that Internet users share news from credible sources to enhance their own credibility (Lee and Ma, 2012). In addition, qualitative data suggest that one of people's main motivations to verify the accuracy of a piece of news before sharing it is:

protecting their positive self-image as they understand the detrimental impacts of sharing fake news on their reputation. [...] Avoiding these adverse effects of sharing fake news is a powerful motivation to scrutinize the authenticity of any news they wish to share. (Waruwu et al., 2020: 7)
To maintain a good epistemic reputation, people and media outlets must avoid sharing fake news, because their audience keeps track of how accurate the news they have shared has been in the past. Experiments have shown that accuracy plays a large role in source evaluation: inaccurate sources quickly become less trusted than accurate sources (even by children, for example, Corriveau and Harris, 2009), people are less likely to follow the advice of a previously inaccurate source (Fischer and Harvey, 1999), content shared by inaccurate sources is deemed less plausible (e.g. Collins et al., 2018), and, by contrast, being seen as a good source of information leads to being perceived as more competent (see, for example, Altay et al., 2020; Altay and Mercier, 2020; Boyer and Parren, 2015). In addition, sources sharing political falsehoods are condemned even when these falsehoods support the views of those who judge the sources (Effron, 2018). Epistemic reputation is not restricted to individuals, as media outlets also have an epistemic reputation to defend: 89% of Americans believe it is "very important" for a news outlet to be accurate, 86% that it is "very important" that they correct their mistakes (Knight Foundation, 2018), and 85% say that accuracy is a critical reason why they trust a news source (The Media Insight Project, 2016). Accordingly, 63% of Americans say they have stopped getting news from an outlet in response to fake news (Pew Research Center, 2019a), and 50% say they avoided someone because they thought they would bring up fake news in conversation (Pew Research Center, 2019a). Americans and Europeans are also able to evaluate media outlets' reliability: their evaluations, in the aggregate, closely match those of professional fact-checkers or media experts (Pennycook and Rand, 2019a; Schulz et al., 2020).
As a result, people consume less news from untrustworthy websites (Allen et al., 2020; Guess et al., 2020) and engage more with articles shared by trusted figures and trusted media outlets on social media (Sterrett et al., 2019). However, for the reputational costs of sharing a few fake news stories to explain why so few sources share fake news, there should be a trust asymmetry: epistemic reputation must be lost more easily than it is gained. Otherwise, sources could get away with sharing a substantial amount of fake news stories if they compensated by sharing real news stories to regain some trust. Experimental evidence suggests that trust takes time to build but can collapse quickly, in what Slovic (1993: 677) calls "the asymmetry principle." For example, the reputation of an inaccurate advisor will be discounted more than the reputation of an accurate advisor will be credited (Skowronski and Carlston, 1989). In general, the reputational costs associated with being wrong are higher than the reputational benefits of being right (Yaniv and Kleinberger, 2000). A single mistake can ruin someone's reputation for trustworthiness, while a lot of positive evidence is required to change the reputation of someone seen as untrustworthy (Rothbart and Park, 1986). For the trust asymmetry to apply to the sharing of real and fake news, participants must be able to deem the former more plausible than the latter. Some evidence suggests that US participants are able to discriminate between real and fake news in this manner (Altay et al., 2020; Bago et al., 2020; Pennycook and Rand, 2019b; Pennycook et al., 2019, 2020). Prior to our experiments, we ran a pre-test to ensure that our set of news stories had the desired properties in terms of perceived plausibility (fake or real) and political orientation (pro-Democrats or pro-Republicans) (see Section 2 of the Electronic Supplemental Material [ESM]).
To the extent that people find fake news less plausible than real news, that real news is deemed at least somewhat plausible, and that fake news is deemed implausible (as our pre-test suggests is true for our stimuli), trust asymmetry leads to the following hypothesis:

H1: A good reputation is more easily lost than gained—the negative effect on trust of sharing one fake news story, against a background of real news stories, should be larger than the positive effect on trust of sharing one real news story, against a background of fake news stories.

If the same conditions hold for politically congruent news, trust asymmetry leads to the following hypothesis:

H2: A good reputation is more easily lost than gained, even if the fake news is politically congruent—the negative effect on trust of sharing one fake news story, against a background of real news stories, should be larger than the positive effect on trust of sharing one real news story, against a background of fake news stories, even if the news stories are all politically congruent with the participant's political stance.

We also predicted that, in comparison with real-world media outlets, sources in our experiments sharing only fake news stories should have trust ratings similar to junk media (such as Breitbart), and trust ratings different from mainstream media (such as the New York Times). By contrast, sources sharing only real news stories should have trust ratings similar to mainstream media, and different from junk media. If H1 and H2 are true, and if people inflict severe reputational damage on sources of fake news, the prospect of suffering this reputational damage, combined with a natural concern for one's reputation, should make sharing fake news costly. Participants should be more reluctant to share fake news when their reputation is at stake than when it isn't.
To measure participants' reluctance to share fake news, we asked them how much they would have to be paid to share various fake news stories (for a similar method, see Graham and Haidt, 2012; Graham et al., 2009). These considerations lead to the following hypotheses:

H3: Sharing fake news should be costly: the majority of people should ask to be paid a non-null amount of money to share a fake news story on their own social media account.

H4: Sharing fake news should be costlier when one's reputation is at stake—people should ask to be paid more money for sharing a piece of fake news when it is shared by their own social media account, compared to when it is not shared by them.

If H2 is true, the reputational costs inflicted on fake news sharers should also be exerted on those who share politically congruent fake news, leading to:

H5: Sharing fake news should appear costly for most people, even when the fake news stories are politically congruent: the majority of people should ask to be paid a non-null amount of money to share a politically congruent fake news story on their own social media account.

H6: Sharing fake news should appear costlier when reputation is on the line, even when the fake news stories are politically congruent—people should ask to be paid more money for a piece of politically congruent fake news when it is shared on their own social media account, compared to when it is shared by someone else.

If H3–H6 are true, sharing fake news should also appear costlier than sharing real news:

H7: Sharing fake news should be costlier than sharing real news when one's reputation is at stake—people should ask to be paid more money for sharing a piece of news on their own social media account when the piece of news is fake compared to when it is real.

We conducted four experiments to test these hypotheses (Experiment 1 tests H1, Experiment 2 tests H2, Experiment 3 tests H3–H6, and Experiment 4 tests H3, H4, and H7).
Based on preregistered power analyses, we recruited a total of 3,656 online participants from the United States. We also preregistered our hypotheses, primary analyses, and exclusion criteria (based on two attention checks and geolocation for Experiments 1 and 2, and one attention check for Experiments 3 and 4). All the results supporting the hypotheses presented in this manuscript hold when no participants are excluded (see Section 9 of the ESM). Preregistrations, data, materials, and the scripts used to analyze the data are available on the Open Science Framework at https://osf.io/cxrgq/.

Experiment 1

The goal of the first experiment was to measure how easily a good reputation could be lost, compared to the difficulty of acquiring a good reputation. We compared the difference between the trust granted to a source sharing one fake news story, after having shared three real news stories, with the trust granted to a source sharing one real news story, after having shared three fake news stories. We predicted that the negative effect on trust of sharing one fake news story, after having shared real news stories, would be larger than the positive effect on trust of sharing one real news story, after having shared fake news stories (H1).

Participants

Based on a pre-registered power analysis, we recruited 1,113 US participants on Amazon Mechanical Turk, paid $0.30. We removed 73 participants who failed at least one of the two post-treatment attention checks (see Section 2 of the ESM), leaving 1,040 participants (510 men, 681 Democrats, MAge = 39.09, SD = 12.32).

Design and procedure

After having completed a consent form, in a between-subjects design, participants were presented with one of the following conditions: three real news stories, three fake news stories, three real news stories and one fake news story, or three fake news stories and one real news story.
The news stories that participants were exposed to were randomly selected from the initial set of eight neutral news stories. Presentation order of the news stories was randomized, but the news story with a different truth-status was always presented at the end. Half of the participants were told that the news stories came from one of the two following made-up outlets: "CSS.co.uk" or "MBI news." The other half were told that the news stories had been shared on Facebook by one of two acquaintances: "Charlie" or "Skyler." After having read the news stories, participants were asked the following question: "How reliable do you think [insert source name] is as a source of information?" on a seven-point Likert-type scale ranging from "Not reliable at all" (1) to "Extremely reliable" (7), with the central measure being "Somewhat reliable" (4). Even though using one question to measure trust in information sources has proven reliable in the past (Pennycook and Rand, 2019a), participants were also asked a related question: "How likely would you be to visit this website in the future?" (for outlets) or "How likely would you be to pay attention to what [insert source name] will post in the future?" (for individuals) on a seven-point Likert-type scale ranging from "Not likely at all" (1) to "Very likely" (7), with the central measure being "Somewhat likely" (4). Before finishing the experiment, participants were presented with a correction of the fake news stories they might have read during the experiment, with a link to a fact-checking article. Fact-checking reliably corrects political misinformation and backfires only in rare cases (see Walter et al., 2019). The ideological position of the participants was measured in the demographics section with the following question: "If you absolutely had to choose between only the Democratic and Republican party, which would you prefer?"
Polls have shown that 81% of Americans who consider themselves independent fall into the Democratic–Republican axis (Pew Research Center, 2019b), and that this dichotomous scale yields results similar to those of more fine-grained scales (Pennycook and Rand, 2019a, 2019b).

Materials

We pre-tested our materials with 288 US online participants on Amazon Mechanical Turk to select two news sources (among the 10 pre-tested) whose novel names would evoke trust ratings situated between those of mainstream sources and junk media (Pennycook and Rand, 2019a). We also selected 24 news stories (among the 45 pre-tested) from online news media and fact-checking websites that were either real or fake and whose political orientation was either in favor of Republicans, in favor of Democrats, or politically neutral (neither in favor of Republicans nor Democrats; all news stories are available in Section 1 of the ESM). The full results of the pre-test are available in Section 2 of the ESM, but the main elements are as follows. For the stories we retained, the fake news stories were considered less accurate (M = 2.35, SD = 1.66) than the real news stories (M = 4.16, SD = 1.56), t(662) = 14.52, p < .001, d = 1.26. Politically neutral news stories' political orientation (M = 3.96, SD = 0.91) did not significantly differ from the middle of the scale (4), t(222) = .73, p = .46. News stories in favor of Democrats (M = 2.56, SD = 1.82) significantly differed in political orientation from politically neutral news stories (M = 3.96, SD = .91), in the expected direction, t(340) = 10.37, p < .001, d = .97. News stories in favor of Republicans (M = 5.58, SD = 1.76) significantly differed in political orientation from politically neutral news stories (M = 3.96, SD = .91), in the expected direction, t(313) = 11.94, p < .001, d = 1.15. Figure 1 provides an example of the stories presented to the participants.

Figure 1. Example of a politically neutral fake news story shared by "MBI news" on the left, and a politically neutral real news story shared by "Charlie" on the right, as they were presented to the participants.

Results and discussion

All statistical analyses were conducted in R (v3.6.0), using RStudio (v1.1.419). We use parametric tests throughout because the residuals were normally distributed and no statistical assumptions were violated (switching to non-parametric tests would have reduced our statistical power). The t-tests reported in Experiments 1 and 2 are Welch's t-tests. Post hoc analyses for the main analyses presented below can be found in Section 6 of the ESM. The correlation between our two measures of trust (the estimated reliability and the willingness to interact with the source in the future) was 0.77, Pearson's product-moment correlation, t(1,038) = 38.34, p < .001. Since these two measures yielded similar results, in order to have a more robust measure of the epistemic reputation of the source, we combined them into a measure called "Trust." This measure is used in the following analyses. The pre-registered analyses conducted separately on the estimated reliability and the willingness to interact with the source in the future can be found in Section 4 of the ESM. In Experiments 1 and 2, since the slopes that we compare initially do not have the same sign (e.g. 0.98 and −0.30 in Experiment 1), we changed the sign of one slope to compare the absolute values of the slopes (i.e. 0.98 and 0.30).
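The two analysis steps just described (checking that the two trust measures correlate strongly before averaging them into a single "Trust" score, and comparing conditions with Welch's t-test) can be sketched as follows. This is an illustrative Python sketch on simulated ratings, not the study's R code or data:

```python
import numpy as np

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom (unequal-variance t-test)."""
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    t = (x.mean() - y.mean()) / np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    return t, df

rng = np.random.default_rng(1)
n = 500
# Simulated 7-point Likert answers to the two trust questions
reliability = rng.integers(1, 8, n).astype(float)
future_interest = np.clip(np.round(reliability + rng.normal(0, 1.2, n)), 1, 7)

r = np.corrcoef(reliability, future_interest)[0, 1]  # strongly positive here
trust = (reliability + future_interest) / 2          # combined "Trust" score

# Welch's t-test between two (here arbitrary) groups of participants
t_stat, df = welch_t(trust[:250], trust[250:])
```

Averaging is only one way to combine correlated measures; the original analyses were run in R, so this is a conceptual translation rather than the published pipeline.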
Without this manipulation, the interactions would not inform the trust asymmetry hypothesis (e.g. if the slopes had the values "0.98 and −0.98," there would be no asymmetry, but the interaction would be statistically significant).

Figure 2. Interaction plot for the trust attributed to sources sharing politically neutral, congruent, and incongruent news. This figure represents the effect on trust (i.e. reliability rating and willingness to interact in the future) of the number of news stories presented (three or four), and the nature of the majority of the news stories (real or fake). Left panel: Experiment 1; middle and right panels: Experiment 2.

Confirmatory analyses. As predicted by H1, whether the source is a media outlet or an acquaintance, the increase in trust that a source enjoys when sharing one real news story against a background of fake news is smaller (trend = .30, SE = .12) than the drop in trust a source suffers when sharing one fake news story against a background of real news (trend = .98, SE = .12), t(1,036) = 4.11, p < .001. This effect is depicted in Figure 2 (left panel), and holds whether the source is an acquaintance, respective trends: .30, SE = .18; .98, SE = .17; t(510) = 2.79, p = .005, or a media outlet, respective trends: .29, SE = .16; .98, SE = .16; t(522) = 3.11, p = .002. A good reputation is more easily lost than gained. Regardless of whether the source was an acquaintance or a media outlet, participants decreased the trust granted to sources sharing one fake news story after having shared three real news stories more than they increased the trust granted to sources sharing one real news story after having shared three fake news stories.

Experiment 2

This second experiment is a replication of the first experiment with political news.
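The sign-flip logic can be made concrete with a toy computation (the two slope values come from the text; everything else is illustrative):

```python
# Toy illustration of the sign-flip used before testing the trust asymmetry.
loss_slope = -0.98  # trust change when one fake story follows three real ones
gain_slope = 0.30   # trust change when one real story follows three fake ones

# Comparing -0.98 with 0.30 directly would flag a "difference" even for a
# perfectly symmetric pair such as -0.98 vs 0.98, which shows no asymmetry.
asymmetry = abs(loss_slope) - abs(gain_slope)  # > 0: trust lost faster than gained
symmetric_case = abs(-0.98) - abs(0.98)        # exactly 0: no asymmetry
```

The actual analysis tests this difference of absolute slopes as an interaction term; the snippet only shows why the sign of one slope must be flipped first.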
The news stories were either in favor of Republicans or in favor of Democrats. Depending on the participants' own political orientation, the news stories were classified as either politically congruent (e.g. a Democrat exposed to a piece of news in favor of Democrats) or politically incongruent (e.g. a Democrat exposed to a piece of news in favor of Republicans). We predicted that, even when participants receive politically congruent news, we would observe the same pattern as in Experiment 1: the negative effect on trust of sharing one fake news story against a background of real news stories would be larger than the positive effect on trust of sharing one real news story against a background of fake news stories (H2).

Figure 3. Example of a real political news story in favor of Democrats shared by "CSS.co.uk" on the left, and a fake political news story in favor of Democrats shared by "MBI news" on the right, as they were presented to the participants.

Participants

Based on a pre-registered power analysis, we recruited 1,600 participants on Amazon Mechanical Turk, paid $0.30. We removed 68 participants who failed the first post-treatment attention check (but not the second one, see Section 5 of the ESM), leaving 1,532 participants (855 women, 985 Democrats, MAge = 39.28, SD = 12.42).

Design, procedure, and materials

In a between-subjects design, participants were randomly presented with one of the following conditions: three real political news stories, three fake political news stories, three real political news stories and one fake political news story, or three fake political news stories and one real political news story. The news stories were randomly selected from the initial set of 16 political news stories. Whether participants saw only news in favor of Republicans or only news in favor of Democrats was also randomized.
The design and procedure are identical to Experiment 1, except that we only used one type of source (media outlets), since the first experiment showed that the effect holds regardless of the type of source. Figure 3 provides an example of the materials used.

Results

The correlation between the two measures of trust (the estimated reliability and the willingness to interact with the source in the future) was 0.80, Pearson's product-moment correlation, t(1,530) = 51.64, p < .001. Since these two measures yielded similar results, as in Experiment 1, we combined them into a "Trust" measure. The pre-registered separate analyses on the estimated reliability and the willingness to interact with the source in the future can be found in Section 5 of the ESM. Post hoc analyses for the main analyses presented below can also be found in Section 6 of the ESM.

Confirmatory analyses. As predicted by H2, among politically congruent news, we found that the increase in trust that a source enjoys when sharing one real news story against a background of fake news is smaller (trend = .48, SE = .15) than the drop in trust a source suffers when sharing one fake news story against a background of real news (trend = .95, SE = .14), t(737) = 2.31, p = .02 (see the middle panel of Figure 2). Among politically incongruent news, we found that the increase in trust that a source enjoys when sharing one real news story against a background of fake news is smaller (trend = .06, SE = .13) than the drop in trust a source suffers when sharing one fake news story against a background of real news (trend = .99, SE = .14), t(787) = 4.94, p < .001 (see the right panel of Figure 2).

Slopes comparison across experiments (exploratory analyses).
The decrease in trust (in absolute value) suffered by sources sharing one fake news story against a background of real news stories, compared to sources that share only real news stories, was not different for politically neutral news (trend = .98, SE = .12) and political news: politically congruent news (trend = .95, SE = .14), t(1,280) = .06, p = .95; politically incongruent news (trend = .99, SE = .14), t(901) = .03, p = .98. The increase in trust (in absolute value) enjoyed by sources sharing one real news story against a background of fake news stories, compared to sources that share only fake news stories, was not different between politically neutral news (trend = .30, SE = .12) and political news: politically congruent news (trend = .48, SE = .15), t(876) = .92, p = .36; politically incongruent news (trend = .06, SE = .13), t(922) = 1.42, p = .15. However, this increase was smaller for politically incongruent than congruent news, t(731) = 2.68, p = .008. Participants trusted sources sharing politically incongruent news less than sources sharing politically congruent news, β = −0.51, t(2,569) = −10.22, p < .001, and less than sources sharing politically neutral news, β = −0.52, t(2,569) = −11.26, p < .001. On the other hand, we found no significant difference in the trust granted to sources sharing politically neutral news compared to politically congruent news, β = −0.01, t(2,569) = −0.18, p = .86. An equivalence test with equivalence bounds of −0.20 and 0.20 showed that the observed effect is statistically not different from zero and statistically equivalent to zero, t(1,608.22) = −3.99, p < .001.

Comparison of the results of Experiments 1 and 2 with real-world trust ratings (confirmatory analyses). We compared the trust ratings of the sources in Experiments 1 and 2 to the trust ratings that people gave to mainstream media outlets and junk media outlets (Pennycook and Rand, 2019a).
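The equivalence test mentioned above follows the "two one-sided tests" (TOST) logic. A minimal sketch, using a normal approximation to the t distribution (reasonable at roughly 1,600 degrees of freedom) and illustrative inputs rather than the study's exact values:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost_p(effect, se, low, high):
    """TOST p-value: small values support equivalence within [low, high]."""
    p_lower = 1.0 - norm_cdf((effect - low) / se)  # one-sided test of effect > low
    p_upper = norm_cdf((effect - high) / se)       # one-sided test of effect < high
    return max(p_lower, p_upper)

# A near-zero effect with a small standard error is declared equivalent to zero
p_equiv = tost_p(effect=-0.01, se=0.05, low=-0.20, high=0.20)
```

Equivalence is claimed only when both one-sided tests reject, which is why the larger of the two p-values is reported.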
We predicted that sources sharing only fake news stories should have trust ratings similar to junk media, and dissimilar to mainstream media, whereas sources sharing only real news stories should have trust ratings similar to mainstream media, and dissimilar to junk media. To this end, we rescaled the trust ratings from the interval [1, 7] to the interval [0, 1]. To ensure a better comparison with the mainstream sources sampled in studies one and two of Pennycook and Rand (2019a), which relay both political and politically neutral news, we merged the data from Experiment 1 (in which the sources shared politically neutral news) and Experiment 2 (in which the sources shared political news).

Figure 4. Statistical comparison of the four present conditions (three fake news; three fake news and one real news; three real news and one fake news; three real news) with the results obtained in studies one and two of Pennycook and Rand (2019a) for trust scores of mainstream media and junk media. "Very dissimilar" corresponds to a large effect; "Moderately dissimilar" to a medium effect; "Slightly dissimilar" to a small effect; "Not dissimilar" to an absence of statistical difference.

Condition (trust ratings)                       vs. junk media (M = 0.17, SD = .24)                    vs. mainstream media (M = 0.42, SD = .32)
3 fake news (M = 0.15, SD = .23)                Not dissimilar: t(33.79) = 0.39, p = .70, d = .12      Very dissimilar: t(30.4) = 4.67, p < .001, d = 1.21
3 fake news + 1 real news (M = 0.20, SD = .23)  Not dissimilar: t(33.95) = 0.67, p = .51, d = .07      Very dissimilar: t(30.47) = 3.88, p < .001, d = 1.01
3 real news + 1 fake news (M = 0.29, SD = .24)  Slightly dissimilar: t(34.23) = 2.84, p = .01, d = .46 Moderately dissimilar: t(30.61) = 2.26, p = .03, d = .56
3 real news (M = 0.46, SD = .24)                Very dissimilar: t(34.1) = 6.68, p < .001, d = 1.16    Not dissimilar: t(30.55) = 0.37, p = .71, d = .13
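The rescaling step, and the comparison of condition means with published outlet means, can be sketched as follows. The Welch t is computed from summary statistics because only means and SDs are available for the published ratings; the sample sizes below are hypothetical placeholders, not the study's:

```python
import math

def rescale(x, old_min=1.0, old_max=7.0):
    """Map a rating from [old_min, old_max] onto [0, 1]."""
    return (x - old_min) / (old_max - old_min)

def welch_t_from_stats(m1, sd1, n1, m2, sd2, n2):
    """Welch's t statistic from summary statistics (M, SD, n per group)."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    return (m1 - m2) / math.sqrt(v1 + v2)

# A condition mean close to the junk-media mean yields a small t statistic
# (the two sample sizes, 35 and 31, are made up for this sketch)
t_junk = welch_t_from_stats(0.15, 0.23, 35, 0.17, 0.24, 31)
```

With a seven-point scale, a rating of 1 maps to 0, 4 to 0.5, and 7 to 1, which is what makes the conditions comparable to the published [0, 1] trust scores.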
Then, we compared these merged trust scores with the trust scores that mainstream media and junk media received in Pennycook and Rand (2019a) (see Figure 4). As predicted, we found that sources sharing only fake news stories had trust ratings not dissimilar to junk media, and very dissimilar to mainstream media, while sources sharing only real news stories had trust ratings not dissimilar to mainstream media, and dissimilar to junk media. Sharing one real news story against a background of fake news stories was not sufficient to escape the junk media category. The only sources that received trust scores not dissimilar to those of mainstream media were sources sharing exclusively real news stories.

Discussion

A good reputation is more easily lost than gained, even when sharing fake news stories politically congruent with participants' political orientation. The increase in trust gained by sources sharing a real news story against a background of fake news stories was smaller than the decrease in trust suffered by sources sharing a fake news story against a background of real news stories. Moreover, this decrease in trust was not weaker for politically congruent news than for politically neutral or politically incongruent news. Participants did not differentiate between sources sharing politically neutral news and politically congruent news, but they were mistrustful of sources sharing incongruent political news.

Experiment 3

Experiments 1 and 2 show that people are quick to distrust sources sharing fake news, even if they have previously shared real news, and slow to trust sources sharing real news, if they have previously shared fake news. However, by themselves, these results do not show that this is why most people appear to refrain from sharing fake news. In Experiment 3, we test more directly the hypothesis that the reputational fallout from sharing fake news motivates people not to share them.
In particular, if people are aware of the reputational damage that sharing fake news can wreak, they should not willingly share such news unless they are otherwise incentivized. Some evidence from Singaporean participants already suggests that people are aware of the negative reputational fallout associated with sharing fake news (Waruwu et al., 2020). However, no data suggest that the same is true for Americans. The political environment in the United States, in particular the high degree of affective polarization (see, for example, Iyengar et al., 2019), might make US participants more likely to share fake news in order to signal their identity or justify their ideological positions. However, we still predict that even in this environment, most people should be reluctant to share fake news. In Experiment 3, we asked participants how much they would have to be paid to share a variety of fake news stories. However, even if participants ask to be paid to share fake news, it might not be because they fear the reputational consequences—for example, they might be worried that their contacts would accept false information, wherever it comes from. To test this possibility, we manipulated whether the fake news would be shared by the participant's own social media account, or by an anonymous account, leading to the following hypotheses: H3: The majority of participants will ask to be paid to share each politically neutral fake news story on their own social media account. H4: Participants will ask to be paid more money for a piece of fake news when it is shared on their own social media account, compared to when it is shared by someone else. H5: The majority of participants will ask to be paid to share each politically congruent fake news story on their own social media account. H6: Participants will ask to be paid more money for a piece of politically congruent fake news when it is shared on their own social media account, compared to when it is shared by someone else.
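The two summary statistics implied by H3 and H4 can be sketched as follows. This is a minimal illustration with made-up responses, not the study's dataset, using the paper's 4-point payment scale (1 = $0, 2 = $10, 3 = $100, 4 = $1000 or more):

```python
# Illustrative sketch (hypothetical responses, not the study's data): the
# summaries behind H3 (share of participants asking for a non-null amount)
# and H4 (higher requests in the Personal than the Anonymous Condition).
from statistics import mean

# Hypothetical scale responses to one fake news item, one per participant.
personal = [2, 3, 1, 4, 2, 3, 2, 1, 3, 4]    # shared from own account
anonymous = [1, 2, 1, 2, 1, 3, 2, 1, 2, 2]   # shared by an anonymous account

# H3-style summary: share of participants requesting at least $10
# (scale point >= 2) in the Personal Condition.
share_paid = sum(r >= 2 for r in personal) / len(personal)

# H4-style summary: the mean requested scale point should be higher when
# the news is shared from one's own account. (The paper tests this with a
# regression; a raw mean difference conveys the same direction.)
difference = mean(personal) - mean(anonymous)

print(f"{share_paid:.0%} asked to be paid; mean scale difference = {difference:.2f}")
```

A positive difference in the real data is what the reputational account predicts: payment demands rise precisely when the sharer's own name, and hence reputation, is attached to the fake news.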
Participants Based on a pre-registered power analysis, we recruited 505 participants on Prolific Academic, paid £0.20. We removed one participant who failed to complete the post-treatment attention test (see Section 2 of the ESM), and 35 participants who reported not using social media, leaving 469 participants (258 women, Mage = 32.87, SD = 11.51). Design, procedure and materials In a between-subjects design, participants had to rate how much they would have to be paid for their contacts to see fake news stories, either shared from their own personal social media account (in the Personal Condition), or by an anonymous account (in the Anonymous Condition). We used the same set of fake news as in Experiment 1 and Experiment 2, but this time the news were presented without any source. Each participant saw 12 fake news stories in a randomized order and rated each of them. In the Personal Condition, after having read a fake news story, participants were asked the following question: "How much would you have to be paid to share this piece of news with your contacts on social media from your personal account?" on a four-point Likert-type scale: "$0" (1), "$10" (2), "$100" (3), "$1000 or more" (4). We used a Likert-type scale instead of an open-ended format because in a previous version of this experiment the open-ended format generated too many outliers, making statistical analysis difficult (see Section 3 of the ESM). In the Anonymous Condition, after having read a fake news story, participants were asked the following question: "How much would you have to be paid for this piece of news to be seen by your contacts on social media, shared by an anonymous account?" on a four-point Likert-type scale: "$0" (1), "$10" (2), "$100" (3), "$1000 or more" (4). Results Confirmatory analyses.
In support of H3, for each politically neutral fake news, a majority of participants asked to be paid a non-null amount of money to share it (share of participants requesting at least $10 to share each piece of fake news: M = 66.45%, Min = 61.8%, Max = 69.5%) (for a visual representation see Figure 5; for more details see Section 8 of the ESM). In support of H4, participants asked to be paid more to share politically neutral fake news stories from their personal account compared to when it was shared by an anonymous account, β = 0.28, t(467) = 3.73, p < .001 (see Figure 6). In support of H5, for each politically congruent fake news, a majority of participants asked to be paid a non-null amount of money to share it (share of participants requesting at least $10 to share each piece of fake news: M = 64.9%, Min = 59.4%, Max = 71.7%) (for a visual representation see Figure 5; for more details see Section 8 of the ESM). In support of H6, participants asked to be paid more to share politically congruent fake news stories from their personal account compared to when it was shared by an anonymous account, β = 0.24, t(467) = 3.24, p = .001 (see Figure 6). Exploratory analyses. Participants asked to be paid more to share politically incongruent news than politically congruent news, β = 0.28, t(5,625) = 8.77, p < .001, and politically neutral news, β = 0.32, t(5,625) = 9.93, p < .001. On the other hand, we found no Figure 5. Bar plots representing how much participants asked to be paid to share fake news stories in the Anonymous Condition (on the left) and Personal Condition (on the right) in Experiments 3 and 4 (as well as real news stories in the latter).
The red bars represent the percentage of participants saying they would share a piece of news for free, while the green bars represent the percentage of participants asking for a non-null amount of money to share a piece of news. significant difference between the amount requested to share politically congruent and neutral fake news, β = 0.04, t(5,625) = 1.16, p = .25. Additional exploratory analyses and descriptive statistics are available in Section 7 of the ESM. Figure 6. Interaction plot for the amount of money requested (raw values) in the Anonymous Condition and the Personal Condition, with separate lines for politically incongruent, congruent, and neutral news. For each politically incongruent fake news, a majority of participants asked to be paid a non-null amount of money to share it (share of participants requesting at least $10 to share each piece of fake news: M = 70.73%, Min = 60.4%, Max = 77.2%) (for a visual representation see Figure 5; for more details see Section 8 of the ESM). In the Personal Condition, the 9.3% of participants who were willing to share all the pieces of fake news presented to them for free accounted for 37.4% of the $0 responses. Experiment 4 Experiment 4 is a replication of Experiment 3 with novel materials (i.e. a new set of news) and the use of real news in addition to fake news. It allows us to test the generalizability of the findings of Experiment 3 (in particular H3 and H4), and to measure the amount of money participants will request to share fake news compared to real news. Thus, in addition to H3 and H4, Experiment 4 tests the following hypothesis: H7. People will ask to be paid more money for sharing a piece of news on their own social media account when the news is fake compared to when it is real. Participants Based on a pre-registered power analysis, we recruited 150 participants on Prolific Academic, paid £0.20.
We removed eight participants who reported not using social media (see Section 2 of the ESM), leaving 142 participants (94 women, Mage = 30.15, SD = 9.93). Design, procedure and materials The design and procedure were similar to Experiment 3, except that participants were presented with 20 news stories instead of 12, and that half of these news stories were real (the other half being fake). We used novel materials because the sets of news used in Experiments 1, 2, and 3 had become outdated. The new set of news is related to COVID-19 and is not overtly political. Results and discussion Confirmatory analyses. In support of H3, for each fake news, a majority of participants asked to be paid a non-null amount of money to share it (share of participants requesting at least $10 to share each piece of fake news: M = 71.1%, Min = 66.7%, Max = 76.0%) (for a visual representation see Figure 5; for more details see Section 8 of the ESM). In support of H4, participants asked to be paid more to share fake news from their personal account than from an anonymous account, β = 0.32, t(148) = 3.41, p < .001. In an exploratory analysis, we found that participants did not request significantly more money to share real news from their personal account compared to an anonymous account, β = 0.18, t(140) = 1.41, p = .16. The effect of anonymity was stronger for fake news than for real news, interaction term: β = 0.32, t(2,996) = 6.22, p < .001. In support of H7, participants asked to be paid more to share fake news stories than real news stories from their personal account, β = 0.57, t(1,424) = 18.92, p < .001. Exploratory analyses. By contrast with fake news, for some real news, a majority of participants agreed to share them without being paid (share of participants requesting at least $10 to share each piece of real news: M = 56.5%, Min = 43.3%, Max = 67.3%) (for a visual representation see Figure 5; for more details see Section 8 of the ESM).
In the Personal Condition, the 14.1% of participants who were willing to share all the pieces of fake news presented to them for free accounted for 43.8% of all the $0 responses. We successfully replicated the findings of Experiment 3 on a novel set of news, offering further support for H3 and H4, and demonstrated that the perceived cost of sharing fake news is higher than the perceived cost of sharing real news. Overall, the results of Experiments 3 and 4 suggest that most people are reluctant to share fake news, even when it is politically congruent, and that this reluctance is motivated in part by a desire to prevent reputational damage, since it is stronger when the news is shared from the participant's own social media account. These results are consistent with most people's expressed commitment to share only accurate news articles on social media (Pennycook et al., 2019), with their awareness that their reputation will be negatively affected if they share fake news (Waruwu et al., 2020), and with the fact that a small minority of people is responsible for the majority of fake news diffusion (Grinberg et al., 2019; Guess et al., 2019; Nelson and Taneja, 2018; Osmundsen et al., 2020). However, our results should be interpreted tentatively since they are based on participants' self-reported intentions. We encourage future studies to extend these findings by relying on actual sharing decisions by social media users. General discussion Even though fake news can be made to be cognitively appealing, and congruent with anyone's political stance, it is only shared by a small minority of social media users, and by specialized media outlets. We suggest that so few sources share fake news because sharing fake news hurts one's reputation.
In Experiments 1 and 2, we show that sharing fake news does hurt one's reputation, and that it does so in a way that cannot be easily mended by sharing real news: not only did trust in sources that had provided one fake news story against a background of real news drop, but this drop was larger than the increase in trust yielded by sharing one real news story against a background of fake news stories (an effect that was also observed for politically congruent news stories). Moreover, sharing only one fake news story, in addition to three real news stories, is sufficient for trust ratings to become significantly lower than the average of the mainstream media. Not only is sharing fake news reputationally costly, but people appear to take these costs into account. In Experiments 3 and 4, a majority of participants declared they would have to be paid to share each of a variety of fake news stories (even when the stories were politically congruent), participants requested more money when their reputation could be affected, and the amount of money requested was larger for fake news than for real news. These results suggest that people's general reluctance to share fake news is in part due to reputational concerns, which dovetails well with qualitative data indicating that people are aware of the reputational costs associated with sharing fake news (Waruwu et al., 2020). In this perspective, Experiments 1 and 2 show that these fears are founded, since sharing fake news effectively hurts one's reputation in a way that appears hard to fix. Consistent with past work showing that a small minority of people shares most of the fake news (e.g. Grinberg et al., 2019; Guess et al., 2019; Nelson and Taneja, 2018; Osmundsen et al., 2020), in Experiments 3 and 4 we observed that a small minority of participants (less than 15%) requested no payment to share any of the fake news items they were presented with.
These participants accounted for over a third of all the cases in which a participant requested no payment to share a piece of fake news. Why would a minority of people appear to have no compunction in sharing fake news, and why would many people occasionally share the odd fake news story? The sharing of fake news in spite of the potential reputational fallout can likely be explained by a variety of factors, the most obvious being that people might fail to realize a piece of news is fake: if they believe the news to be real, people have no reason to suspect that their reputation would suffer from sharing it (on the contrary). Studies suggest that people are, on the whole, able to distinguish fake from real news (Altay et al., 2020; Bago et al., 2020; Pennycook et al., 2019, 2020; Pennycook and Rand, 2019b), and that they are better at doing so for politically congruent than incongruent fake news (Pennycook and Rand, 2019b). However, this ability does not always translate into a refusal to share fake news (Pennycook et al., 2019, 2020). Why would people share news they suspect to be fake? There are a number of reasons why people might share even news they recognize as fake, which we illustrate with popular fake news from 2016 to 2018 (BuzzFeed, 2016, 2017, 2018). Some fake news might be shared because they are entertaining ("Female Legislators Unveil 'Male Ejaculation Bill' Forbidding The Disposal Of Unused Semen," see Acerbi, 2019; Tandoc, 2019; Tandoc et al., 2018b; Waruwu et al., 2020), or because they serve a phatic function ("North Korea Agrees To Open Its Doors To Christianity," see Berriche and Altay, 2020; Duffy and Ling, 2020), in which cases sharers would not expect to be judged harshly based on the accuracy of the news.
Some fake news relate to conspiracy theories ("FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide"), and recent work shows people high in need for chaos—people who might not care much about how society sees them—are particularly prone to sharing such news (Petersen et al., 2018). A few people appear to be so politically partisan that the perceived reputational gains of sharing politically congruent news, even fake, might outweigh the consequences for their epistemic reputation (Hopp et al., 2020; Osmundsen et al., 2020; Tandoc et al., 2018b). Some fake news might fall in the category of news that would be very interesting if they were true, and this interestingness might compensate for their lack of plausibility (e.g. "North Korea Agrees To Open Its Doors To Christianity," see Altay et al., 2020). Finally, the question of why people share fake news in spite of the reputational fallout assumes that the sharing of fake news is not anonymous. However, on some platforms, people can share news anonymously, and we would expect fake news to be more likely to flourish in such environments. Indeed, some of the most popular fake news (e.g. pizzagate, QAnon) started flourishing on anonymous platforms such as 4chan. Their transition toward more mainstream, non-anonymous social media might be facilitated once the news are perceived as being sufficiently popular that one doesn't necessarily jeopardize one's reputation by sharing them (Acerbi, 2020). This non-exhaustive list shows that in a variety of contexts, the negative reputational consequences of sharing fake news can be either ignored, or outweighed by other concerns (see also, e.g. Brashier and Schacter, 2020; Guess et al., 2019; Mourao and Robertson, 2019). Beyond the question of fake news, our studies also speak to the more general question of how people treat politically congruent versus politically incongruent information.
In influential motivated reasoning accounts, no essential difference is drawn between biases in the rejection of information that does not fit our views or preferences, and biases in the acceptance of information that fits our views or preferences (Ditto et al., 2009; Kunda, 1990). By contrast, another account suggests that people should be particularly critical of information that does not fit their priors, rather than being particularly accepting of information that does (Mercier, 2020; Trouche et al., 2018). On the whole, our results support this latter account. In the first three experiments reported here, participants treated politically congruent and politically neutral news in a similar manner, but not politically incongruent news. Participants did not lower their trust less when they were confronted with politically congruent fake news, compared with politically neutral or politically incongruent fake news. Nor did participants ask to be paid less to share politically congruent fake news compared to politically neutral fake news. Instead, participants failed to increase their trust when a politically incongruent real news story was presented (for similar results, see, for example, Edwards and Smith, 1996), and asked to be paid more to share politically incongruent fake news. More generally, the trust ratings of politically congruent news sources were not higher than those of politically neutral news sources, while the ratings of politically incongruent news sources were lower than those of politically neutral news sources. These results support a form of "vigilant conservatism," according to which people are not biased because they accept information congruent with their beliefs too easily, but rather because they spontaneously reject information incongruent with their beliefs (Mercier, 2020; Trouche et al., 2018).
As for fake news, the main danger is not that people are gullible and consume information from unreliable sources; instead, we should worry that people reject good information and don't trust reliable sources—a mistrust that might be fueled by alarmist discourse on fake news (Van Duyn and Collier, 2019). Funding The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This research was supported by the grant EUR FrontCog ANR-17-EURE-0017 and ANR-10-IDEX-0001-02 PSL, and by the CONFIRMA grant from the DGA. Sacha Altay received funding for his PhD thesis from the DGA. ORCID iD Sacha Altay https://orcid.org/0000-0002-2839-7375 Supplemental material Supplemental material for this article is available online. References Acerbi A (2019) Cognitive attraction and online misinformation. Palgrave Communications 5(1): 15. Acerbi A (2020) Cultural Evolution in the Digital Age. Oxford: Oxford University Press. Allen J, Howland B, Mobius M, et al. (2020) Evaluating the fake news problem at the scale of the information ecosystem. Science Advances 6(14): eaay3539. Altay S, de Araujo E and Mercier H (2020) "If this account is true, it is most enormously wonderful": interestingness-if-true and the sharing of true and false news. Available at: https://psyarxiv.com/tdfh5/ Altay S, Majima Y and Mercier H (2020) It's my idea! Reputation management and idea appropriation. Evolution & Human Behavior 41: 235-243. Altay S and Mercier H (2020) Relevance is socially rewarded, but not at the price of accuracy. Evolutionary Psychology 18(1): 1474704920912640. Bago B, Rand DG and Pennycook G (2020) Fake news, fast and slow: deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General. Epub ahead of print 9 January. DOI: 10.1037/xge0000729.
Balmas M (2014) When fake news becomes real: combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Communication Research 41(3): 430-454. Berriche M and Altay S (2020) Internet users engage more with phatic posts than with health misinformation on Facebook. Palgrave Communications 6(1): 1-9. Blaine T and Boyer P (2018) Origins of sinister rumors: a preference for threat-related material in the supply and demand of information. Evolution and Human Behavior 39(1): 67-75. Boyer P and Parren N (2015) Threat-related information suggests competence: a possible factor in the spread of rumors. PLoS One 10(6): e0128421. Brashier NM and Schacter DL (2020) Aging in an era of fake news. Current Directions in Psychological Science 29(3): 316-323. BuzzFeed (2016) Here Are 50 of the Biggest Fake News Hits on Facebook from 2016. Available at: https://www.buzzfeednews.com/article/craigsilverman/top-fake-news-of-2016 BuzzFeed (2017) These Are 50 of the Biggest Fake News Hits on Facebook in 2017. Available at: https://www.buzzfeednews.com/article/craigsilverman/these-are-50-of-the-biggest-fake-news-hits-on-facebook-in BuzzFeed (2018) These Are 50 of the Biggest Fake News Hits on Facebook in 2018. Available at: https://www.buzzfeednews.com/article/craigsilverman/facebook-fake-news-hits-2018 Chambers S (2020) Truth, deliberative democracy, and the virtues of accuracy: is fake news destroying the public sphere? Political Studies. Epub ahead of print 2 April. DOI: 10.1177/0032321719890811. Collins PJ, Hahn U, von Gerber Y, et al. (2018) The bi-directional relationship between source characteristics and message content. Frontiers in Psychology 9: 18. Corriveau KH and Harris PL (2009) Choosing your informant: weighing familiarity and recent accuracy. Developmental Science 12(3): 426-437. Del Vicario M, Bessi A, Zollo F, et al. (2016) The spreading of misinformation online.
Proceedings of the National Academy of Sciences 113(3): 554-559. Ditto PH (2009) Passion, reason, and necessity: a quantity-of-processing view of motivated reasoning. In: Bayne T and Fernandez J (eds) Delusion and Self-Deception: Affective and Motivational Influences on Belief Formation. New York: Taylor & Francis, pp. 23-53. Duffy A and Ling R (2020) The gift of news: phatic news sharing on social media for social cohesion. Journalism Studies 21(1): 72-87. Edwards K and Smith EE (1996) A disconfirmation bias in the evaluation of arguments. Journal of Personality and Social Psychology 71: 5-24. Effron DA (2018) It could have been true: how counterfactual thoughts reduce condemnation of falsehoods and increase political polarization. Personality and Social Psychology Bulletin 44(5): 729-745. Fischer I and Harvey N (1999) Combining forecasts: what information do judges need to outperform the simple average? International Journal of Forecasting 15(3): 227-246. Graham J and Haidt J (2012) Sacred values and evil adversaries: a moral foundations approach. Available at: https://www.semanticscholar.org/paper/Sacred-values-and-evil-adversaries%3A-A-moral-Graham-Haidt/6ba2b8ea7529302ebdb97d7ef02c43437fe86eda Graham J, Haidt J and Nosek BA (2009) Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96(5): 1029. Grinberg N, Joseph K, Friedland L, et al. (2019) Fake news on Twitter during the 2016 US Presidential election. Science 363(6425): 374-378. Guess A, Nagler J and Tucker J (2019) Less than you think: prevalence and predictors of fake news dissemination on Facebook. Science Advances 5(1): eaau4586. Guess A, Nyhan B and Reifler J (2020) Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour 4: 472-480. Guo L and Vargo C (2018) "Fake News" and emerging online media ecosystem: an integrated intermedia agenda-setting analysis of the 2016 US Presidential Election.
Communication Research 47(2): 178-200. Hopp T, Ferrucci P and Vargo CJ (2020) Why do people share ideologically extreme, false, and misleading content on social media? A self-report and trace data-based analysis of countermedia content dissemination on Facebook and Twitter. Human Communication Research. Epub ahead of print 19 May. DOI: 10.1093/hcr/hqz022. Iyengar S, Lelkes Y, Levendusky M, et al. (2019) The origins and consequences of affective polarization in the United States. Annual Review of Political Science 22: 129-146. Knight Foundation (2018) Indicators of News Media Trust. Available at: https://knightfoundation.org/reports/indicators-of-news-media-trust/ Kunda Z (1990) The case for motivated reasoning. Psychological Bulletin 108: 480-498. Lazer DM, Baum MA, Benkler Y, et al. (2018) The science of fake news. Science 359(6380): 1094-1096. Lee CS and Ma L (2012) News sharing in social media: the effect of gratifications and prior experience. Computers in Human Behavior 28(2): 331-339. Lewandowsky S, Ecker UK and Cook J (2017) Beyond misinformation: understanding and coping with the "post-truth" era. Journal of Applied Research in Memory and Cognition 6(4): 353-369. Marchal N, Kollanyi B, Neudert L-M, et al. (2019) Junk News During the EU Parliamentary Elections: Lessons from a Seven-Language Study of Twitter and Facebook. Oxford: Project on Computational Propaganda, Oxford Internet Institute, Oxford University. Mercier H (2020) Not Born Yesterday: The Science of Who We Trust and What We Believe. Princeton: Princeton University Press. Mitchell A, Gottfried J, Fedeli S, et al. (2019) Many Americans Say Made-Up News is a Critical Problem That Needs to Be Fixed. Pew Research Center.
Available at: https://www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/ Mourao RR and Robertson CT (2019) Fake news as discursive integration: an analysis of sites that publish false, misleading, hyperpartisan and sensational information. Journalism Studies 20(14): 2077-2095. Nelson JL and Taneja H (2018) The small, disloyal fake news audience: the role of audience availability in fake news consumption. New Media & Society 20(10): 3720-3737. Osmundsen M, Bor A, Bjerregaard Vahlstrup P, et al. (2020) Partisan Polarization Is the Primary Psychological Motivation behind "Fake News" Sharing on Twitter. Available at: https://psyarxiv.com/v45bk/ Painter C and Hodges L (2010) Mocking the news: how The Daily Show with Jon Stewart holds traditional broadcast news accountable. Journal of Mass Media Ethics 25(4): 257-274. Pennycook G and Rand DG (2019a) Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences 116(7): 2521-2526. Pennycook G and Rand DG (2019b) Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188: 39-50. Pennycook G, Epstein Z, Mosleh M, et al. (2019) Understanding and reducing the spread of misinformation online. Pennycook G, McPhetres J, Zhang Y, et al. (2020) Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy nudge intervention. Psychological Science 31(7): 770-780. Peters MA (2018) Education in a post-truth world. In: Peters MA, Rider S, Hyvonen M and Besley T (eds) Post-Truth, Fake News. Singapore: Springer, pp. 145-150. Petersen MB, Osmundsen M and Arceneaux K (2018) A "Need for Chaos" and the sharing of hostile political rumors in advanced democracies. Available at: https://www.researchgate.
net/publication/327382989_A_Need_for_Chaos_and_the_Sharing_of_Hostile_Political_Rumors_in_Advanced_Democracies Pew Research Center (2019a) Many Americans Say Made-up News Is a Critical Problem That Needs to Be Fixed. Available at: https://www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/ Pew Research Center (2019b) Political Independents: Who They Are, What They Think. Available at: https://www.pewresearch.org/politics/2019/03/14/political-independents-who-they-are-what-they-think/ Quandt T, Boberg S, Schatto-Eckrodt T, et al. (2020) Pandemic news: Facebook pages of mainstream news media and the coronavirus crisis—a computational content analysis. Available at: https://dblp.uni-trier.de/rec/journals/corr/abs-2005-13290.html Rothbart M and Park B (1986) On the confirmability and disconfirmability of trait concepts. Journal of Personality and Social Psychology 50(1): 131. Schulz A, Fletcher R and Popescu M (2020) Are news outlets viewed in the same way by experts and the public? A comparison across 23 European countries. Reuters Institute Factsheet. Available at: https://reutersinstitute.politics.ox.ac.uk/are-news-outlets-viewed-same-way-experts-and-public-comparison-across-23-european-countries Skowronski JJ and Carlston DE (1989) Negativity and extremity biases in impression formation: a review of explanations. Psychological Bulletin 105(1): 131. Slovic P (1993) Perceived risk, trust, and democracy. Risk Analysis 13(6): 675-682. Sterrett D, Malato D, Benz J, et al. (2019) Who shared it? Deciding what news to trust on social media. Digital Journalism 7(6): 783-801. Tandoc EC Jr (2019) The facts of fake news: a research review. Sociology Compass 13(9): e12724. Tandoc EC Jr, Lim ZW and Ling R (2018a) Defining "fake news": a typology of scholarly definitions. Digital Journalism 6(2): 137-153. Tandoc EC Jr, Ling R, Westlund O, et al.
(2018b) Audiences' acts of authentication in the age of fake news: a conceptual framework. New Media & Society 20(8): 2745-2763. The Media Insight Project (2016) A New Understanding: What Makes People Trust and Rely on News. Available at: http://bit.ly/lrmuYok Trouche E, Johansson P, Hall L, et al. (2018) Vigilant conservatism in evaluating communicated information. PLoS One. DOI: 10.1371/journal.pone.0188825. Van Bavel JJ and Pereira A (2018) The partisan brain: an identity-based model of political belief. Trends in Cognitive Sciences 22(3): 213-224. Van Duyn E and Collier J (2019) Priming and fake news: the effects of elite discourse on evaluations of news media. Mass Communication and Society 22(1): 29-48. Vosoughi S, Roy D and Aral S (2018) The spread of true and false news online. Science 359(6380): 1146-1151. Walter N, Cohen J, Holbert RL, et al. (2019) Fact-checking: a meta-analysis of what works and for whom. Political Communication 37: 350-375. Waruwu BK, Tandoc EC Jr, Duffy A, et al. (2020) Telling lies together? Sharing news as a form of social authentication. New Media & Society. Epub ahead of print 10 June. DOI: 10.1177/1461444820931017. Yaniv I and Kleinberger E (2000) Advice taking in decision making: egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes 83: 260-281. Author biographies Sacha Altay is completing his PhD thesis at the Jean Nicod Institute, on the topic of misinformation from a cognitive and evolutionary perspective. Anne-Sophie Hacquin is a research engineer at the Jean Nicod Institute working on psychology and public policy. Hugo Mercier is a research scientist at the CNRS (Jean Nicod Institute) working on communication from a cognitive and evolutionary perspective.