Professional fact-checking, a prominent approach to combating misinformation, does not scale easily. Furthermore, some distrust fact-checkers because of alleged liberal bias. We explore a solution to these problems: using politically balanced groups of laypeople to identify misinformation at scale. Examining 207 news articles flagged for fact-checking by Facebook algorithms, we compare accuracy ratings of three professional fact-checkers who researched each article to those of 1128 Americans from Amazon Mechanical Turk who rated each article's headline and lede. The average ratings of small, politically balanced crowds of laypeople (i) correlate with the average fact-checker ratings as well as the fact-checkers' ratings correlate with each other and (ii) predict whether the majority of fact-checkers rated a headline as "true" with high accuracy. Furthermore, cognitive reflection, political knowledge, and Democratic Party preference are positively related to agreement with fact-checkers, and identifying each headline's publisher leads to a small increase in agreement with fact-checkers.

Previous efforts at using the crowd to identify misinformation have typically focused on allowing users to flag content that they encounter on platform and believe is problematic, and then algorithmically leveraging this user flagging activity (23). Most closely related to the current paper, crowd ratings of the trustworthiness of news publishers were very highly correlated with the ratings of professional fact-checkers (20, 21). These publisher-level ratings, however, may have limited utility for fighting online misinformation. First, there is a great deal of heterogeneity in the quality of content published by a given outlet (22). Thus, using publisher-level ratings may lead to a substantial number of false negatives and false positives; in other words, publisher-level ratings are too coarse to reliably classify article accuracy. This interferes with the effectiveness of both labeling and down-ranking problematic content. Second, publisher trust ratings are largely driven by familiarity: people overwhelmingly distrust news publishers that they are unfamiliar with (20, 21). Thus, using publisher-level ratings unfairly punishes publishers that produce accurate content but are either new or niche outlets. This is highly problematic given that much of the promise of the internet and social media as positive societal forces comes from reducing the barrier to entry for new and specialized content producers. To address these shortfalls and increase the utility of fact-checking, we ask whether the wisdom of crowds is sufficiently powerful to allow laypeople to successfully tackle the substantially harder, and much more practically useful, problem of rating the veracity of individual articles.
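To make the two benchmarks above concrete, the sketch below simulates the comparison logic on synthetic data: (i) the average pairwise correlation among fact-checkers versus the correlation of a crowd average with the mean fact-checker rating, and (ii) how well the crowd average predicts a majority-"true" verdict. All numbers here are made up; the variable names, noise levels, and the above-midpoint thresholding rule are illustrative assumptions, not the paper's actual analysis pipeline.

```python
# Minimal sketch of the crowd-vs-fact-checker benchmarking described above.
# Synthetic data only: a latent veracity score plus independent rater noise.
import numpy as np

rng = np.random.default_rng(0)
n_articles = 207  # number of articles in the study

latent = rng.normal(size=n_articles)                               # latent veracity
fact_checkers = latent + rng.normal(scale=0.8, size=(3, n_articles))  # 3 raters
crowd_avg = latent + rng.normal(scale=0.6, size=n_articles)        # crowd average

# (i) Benchmark: average pairwise correlation among the three fact-checkers...
pairs = [(0, 1), (0, 2), (1, 2)]
fc_pairwise = np.mean([np.corrcoef(fact_checkers[i], fact_checkers[j])[0, 1]
                       for i, j in pairs])

# ...versus correlation of the crowd average with the mean fact-checker rating.
crowd_vs_fc = np.corrcoef(crowd_avg, fact_checkers.mean(axis=0))[0, 1]

# (ii) Classification: does the crowd average predict whether a majority of
# fact-checkers rated the article "true" (here: above the scale midpoint)?
majority_true = (fact_checkers > 0).sum(axis=0) >= 2
predicted_true = crowd_avg > 0
accuracy = (majority_true == predicted_true).mean()

print(f"fact-checker pairwise r:  {fc_pairwise:.2f}")
print(f"crowd vs. fact-checker r: {crowd_vs_fc:.2f}")
print(f"majority-'true' accuracy: {accuracy:.2f}")
```

If the crowd-average correlation matches or exceeds the fact-checkers' pairwise correlation, the crowd agrees with the fact-checkers about as well as the fact-checkers agree with one another, which is the standard the paper uses.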