There is no end to examples of fake news cited in Wikipedia articles. The list of premature obituaries, for example, has grown considerably since the dawn of internet hoaxes, thanks to how easily misinformation can propagate.
Upworthy, KQED, Snopes, the Trust Project, and even the U.S. Department of Homeland Security all recently convened in Boston to answer one monumental question that’s been quietly looming over our heads: in a world bursting with free-flowing content, how do we stop the spread of misinformation?
In February, the Wikimedia Foundation joined a handful of media organizations at the MIT Media Lab to lend their expertise at MisinfoCon, a summit and hack-a-thon dedicated to addressing the ever-growing problem of fake news online.
Propaganda, though often considered a bygone tool of marketing, is nothing new. Native advertising presents ads disguised as legitimate news articles. Clickbait spreads as consumers trade false claims that appear, at first glance, to be true and verified. Seemingly innocuous memes go viral on social media, some containing falsehoods that churn furiously through news feeds. With more people getting their news from the internet than anywhere else, these are the problems most MisinfoCon attendees came to solve.
Media literacy organization First Draft demonstrated their new Google Chrome extension, NewsCheck, which lets viewers investigate an image or video’s authenticity together by completing a survey checklist and assessing the results. The Berkeley Institute for Data Science (BIDS) designed software to help anyone on the internet collaborate and fact check with others. During the summit, they referenced Wikipedia’s community of volunteer editors to better inform their workflow.
First Draft and BIDS already have credibility indicators in place, as do Wikipedia’s editors. While more educators, librarians, scientists, and engineers chipped away at their projects, a small cohort broke away to look at what makes content credible for online news as we know it.
All digital content contains some type of metadata—timestamps, file sizes, meta tags, etc. If we could attach metadata to any online content that would indicate its credibility, what would that include? We asked this and many other questions during the breakaway session, and came to several rough conclusions vaguely similar to Wikipedia’s guidelines for verifiability:
- Origin and motivation: Who provided the claim, and when?
- Byline: Who is taking credit for the claim’s research and writing?
- Sourcing: Is it possible to track down the writer’s sources? Are they clearly attributed?
- Cost of verification: Who does this article benefit financially?
- Tone and typology: Does the content intend to inform, or convince? Is it descriptive or prescriptive?
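To make the idea concrete, the rough conclusions above could be modeled as structured fields attached to a piece of content. The sketch below is purely illustrative — the class, field names, and check are assumptions for this example, not a published schema from the session:

```python
from dataclasses import dataclass, field

# Hypothetical "metadata of news" record — field names mirror the
# criteria listed above (origin, byline, sourcing, beneficiary, tone),
# but this is an illustrative sketch, not a real standard.
@dataclass
class CredibilityMetadata:
    origin: str                 # who provided the claim
    published: str              # when it was published (ISO 8601 date)
    byline: str                 # who takes credit for research and writing
    sources: list = field(default_factory=list)  # clearly attributed sources
    beneficiary: str = ""       # who benefits financially from the article
    intent: str = "inform"      # "inform" or "convince"

article = CredibilityMetadata(
    origin="Example Wire Service",
    published="2017-02-25",
    byline="A. Reporter",
    sources=["https://example.org/primary-report"],
)

# One trivially simple credibility signal: flag content with no
# attributed sources at all.
def flag_unsourced(meta: CredibilityMetadata) -> bool:
    return len(meta.sources) == 0

print(flag_unsourced(article))  # False — this record lists a source
```

A real indicator would of course combine many such signals; the point is only that each question in the list maps naturally onto a machine-readable field.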
Prototyping the “metadata of news” is still in the works. Wikipedians have been refining indicators for credibility for sixteen years, laying solid groundwork for the rest of the web at large. Organizations like the News Literacy Project are training middle and high school students to utilize media literacy skills, giving them the tools to investigate dubious claims, and encouraging them to teach older generations.
Although nearly every project revolved around social media, representatives from the two biggest platforms in the game were noticeably absent: Facebook and Twitter. Despite this, many tools unveiled at MisinfoCon can be used across platforms and across nations. Some projects proposed ideas to cut off revenue and incentives to sites promoting fake news, instead rewarding organizations that prioritize newsroom diversity. Another, Hypothes.is, essentially adds a “Talk page” layer to every accessible page, from academia to memes, allowing critical analysis of all the web has to offer. Even the fake bits.