Remember when a shark swam through the streets of New Jersey during Hurricane Sandy? Actually, it didn’t. But wouldn’t it have been handy to be able to check the veracity of those Garden State shark reports without going through the office of Gov. Chris Christie?
An international group of researchers funded by the EU is working on a lie detector for social media that could make it easier to separate online truth from lies and the lying liars who tell them (apologies to Al Franken).
Named Pheme after a Greek mythological figure who "pried into the affairs of mortals and gods, then repeated what she learned, starting off at first with just a dull whisper, but repeating it louder each time, until everyone knew," the system will collate a variety of data to assess in real time how likely it really is that a baby mermaid was just born in the Philippines or snakes invaded a Pennsylvania casino.
Pheme will, for example, gauge the authority of sources such as news outlets, individual journalists, alleged experts, potential eyewitnesses, and automated bots. It will take into account the histories of social-media accounts to help spot those that have spread false rumors. And it will search for sources that corroborate or deny a given piece of information and plot how conversations about the topic evolve on social networks.
The results will thus focus on the quality of the information, unlike similar analytics tools that concentrate more on language. Software out of Israel, for instance, scours online text for words, phrases, and even metaphors that might indicate depression.
The Pheme results will be displayed in a visual dashboard that should at least give some sense, if not a definitive ruling, of where a rumor falls on the pure-poppycock-to-totally-true scale.
"We can already handle many of the challenges involved [on the Internet], such as the sheer volume of information in social networks, the speed at which it appears and the variety of forms, from tweets, to videos, pictures and blog posts," Kalina Bontcheva, a researcher from the University of Sheffield’s Department of Computer Science, said in a statement. "But it’s currently not possible to automatically analyze, in real time, whether a piece of information is true or false and this is what we’ve now set out to achieve."
Not all rumors created equal
According to the University of Sheffield, Pheme will classify online rumors into four types: speculation — such as whether interest rates might rise; controversy — as over the MMR vaccine; misinformation, where something untrue is spread unwittingly; and disinformation, where falsehoods are disseminated with malicious intent.
There are definitely categories missing here, like April Fools’ Day falsehoods and «Star Wars» casting rumors. But of course not all online rumors are light entertainment or the harmless sort of speculation that long precedes any iPhone or Samsung Galaxy launch. Sometimes rumors can have a real impact, as with fake announcements of people’s deaths and doctored storm-related images that can seriously scare an already-jumpy city.
Pheme will cost an estimated 3.5 million British pounds (around $5.8 million) and evolve over the course of three years. During that time, it will be tested by both the online arm of the Swiss Broadcasting Corporation and the Institute of Psychiatry at King’s College London, where researchers plan to investigate online discourse about recreational drugs, mental-health concerns, and teenage self-harm, and how those discussions translate to patients’ real-life behavior.
In addition to Sheffield and King’s College London, other universities participating in Pheme include Saarland in Germany, Modul University Vienna, and the UK’s University of Warwick, where a professor worked with the London School of Economics and The Guardian’s interactive team to manually analyze the spread of rumors on Twitter during the 2011 London riots.
In the meantime, while you’re waiting for Pheme to appear, there’s always Snopes — and common sense.