The recent Department of Justice indictment of 13 Russian nationals for their interference in American politics from 2014 to 2016 has cast even more light on the subject of fake news on the internet and social media. Misleading, biased or out-and-out fabricated stories are being disseminated through platforms like Google, Facebook and Twitter.
Congress is mulling over the idea of trying to regulate these internet giants with regard to publishing or allowing fake news articles. Meanwhile, Facebook, Google, Twitter and other tech companies are already working on fighting the spread of fake news using technology.
Facebook and Google have begun to implement what they are describing as ‘trust indicators’ to help make news organizations and journalists more transparent and credible when displaying news articles on their sites.
This initiative comes from the Trust Project, a non-partisan group operating out of Santa Clara University with the goal of increasing media literacy and reducing misinformation in the media. Facebook began testing the trust indicators in mid-February, and Google announced it would implement them soon. Twitter expressed support for the Trust Project but did not indicate if or when it would begin using trust indicators.
These trust indicators will display a number of pieces of information. They will indicate whether an article is news or a paid advertisement. They will link to other stories published by the journalist or media outlet. They will also provide a section where the publisher can document the sources for the claims made in their articles.
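As a rough illustration, the information those indicators carry could be modeled as a small structured record attached to each article. The field names below are hypothetical, not the Trust Project's or Facebook's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrustIndicator:
    """Hypothetical record for Trust Project-style indicators on an article."""
    is_paid_content: bool                 # paid advertisement vs. news article
    author_story_links: List[str]         # other stories by the journalist
    outlet_story_links: List[str]         # other stories by the media outlet
    cited_sources: List[str] = field(default_factory=list)  # documented sources for claims

indicator = TrustIndicator(
    is_paid_content=False,
    author_story_links=["https://example.com/reporter/story-1"],
    outlet_story_links=["https://example.com/outlet/story-2"],
    cited_sources=["https://example.com/primary-source"],
)
```

A platform rendering an article could then display each field alongside the story, letting readers inspect the publisher's sourcing directly.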
The hope is that these trust indicators, appended to news articles displayed on Facebook and Google, will provide some automatic fact checking and verification. Users would be able to better differentiate between credible sources and fake news.
While trust indicators might be a nice step toward providing some transparency, they don’t explicitly identify fake news stories in all cases. A more aggressive approach is building machine learning algorithms that detect fake news directly.
A startup called Adverif.ai is testing software that uses machine learning to analyze news stories and news sources and identify fake or incorrect stories. The software relies on a large database of known fake news stories combined with other measures, like scanning for headlines that use too many capital letters or headlines that don’t match the text in the body of the story.
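The headline heuristics mentioned above can be sketched in a few lines of Python. This is a toy illustration of the general idea, not Adverif.ai's actual method, and the thresholds are arbitrary assumptions:

```python
import re

def headline_caps_ratio(headline: str) -> float:
    """Fraction of the headline's alphabetic characters that are upper case."""
    letters = [c for c in headline if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c.isupper() for c in letters) / len(letters)

def headline_body_overlap(headline: str, body: str) -> float:
    """Crude topical match: share of substantive headline words found in the body."""
    words = {w for w in re.findall(r"[a-z']+", headline.lower()) if len(w) > 3}
    if not words:
        return 0.0
    body_words = set(re.findall(r"[a-z']+", body.lower()))
    return len(words & body_words) / len(words)

def looks_suspicious(headline: str, body: str,
                     caps_threshold: float = 0.6,
                     overlap_threshold: float = 0.2) -> bool:
    """Flag a story if the headline shouts or barely matches its own body."""
    return (headline_caps_ratio(headline) > caps_threshold
            or headline_body_overlap(headline, body) < overlap_threshold)
```

In a real system, signals like these would feed a trained classifier alongside the database lookup rather than act as hard rules on their own.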
The algorithm has proven successful in detecting a decent number of fake news stories, though there are misses as well. The technology isn’t perfect yet, but it’s a step in the direction of using AI to identify and shut down fake news content automatically.
Another technological initiative targeting fake news aims to create systems capable of mechanistically fact-checking stories in real time. Several startups and cybersecurity firms are working on applications that can fact-check claims made in online articles and present users with the results as they read.
Facebook and Google have already been working with fact-checking organizations to verify the information contained in articles appearing on their platforms. Integrating that fact checking with an automated system would be the next step in neutralizing the power of fake news, especially in environments like social media, where information and news churn through at a rapid pace.
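At its simplest, such an integration might look up claims in an article against a database of verdicts already produced by fact checkers. The database, claim text, and URL below are invented for illustration; a production system would need fuzzy claim matching, not exact string lookup:

```python
# Hypothetical store of fact-checker verdicts, keyed by a normalized claim.
FACT_CHECKS = {
    "the earth is flat": ("False", "https://example.org/factcheck/flat-earth"),
}

def annotate_claims(sentences):
    """Return (sentence, verdict, source) for sentences matching a known fact check."""
    results = []
    for s in sentences:
        key = s.strip().lower().rstrip(".")
        if key in FACT_CHECKS:
            verdict, source = FACT_CHECKS[key]
            results.append((s, verdict, source))
    return results
```

A reader-facing client could run this over an article's sentences and render the verdicts inline as the user scrolls.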
An intermediate step along these lines is a feature Facebook currently offers that displays ‘Related Articles’ alongside news stories. The idea is to surface other stories on the same topic that may contradict the false information in a fake news story, on the theory that showing an alternate view at least alerts readers that the first article they read might not be the whole truth.
Facebook is also relying on its users to help curb the spread of fake news on its platform. Part of Facebook’s work with independent fact checkers involves evaluating articles based on how many times users flag them as fraudulent or false. If enough users flag an article, it gets passed on to the fact-checking team.
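The flag-then-escalate process can be sketched as a simple counter with a threshold. The class name, threshold value, and deduplication rule here are assumptions for illustration, not Facebook's actual pipeline:

```python
from collections import Counter

class FlagQueue:
    """Toy user-flagging pipeline: once an article collects enough
    distinct user flags, it is routed to human fact checkers."""

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.flags = Counter()    # article_id -> count of distinct flaggers
        self.flaggers = {}        # article_id -> set of user_ids who flagged
        self.review_queue = []    # article_ids awaiting fact-checker review

    def flag(self, article_id: str, user_id: str) -> None:
        seen = self.flaggers.setdefault(article_id, set())
        if user_id in seen:
            return                # count each user at most once per article
        seen.add(user_id)
        self.flags[article_id] += 1
        if self.flags[article_id] == self.threshold:
            self.review_queue.append(article_id)
```

Counting each user only once is one plausible defense against a single account mass-flagging a story it dislikes, though it does nothing against coordinated groups, which is part of the limitation discussed below.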
There are some dangers and limitations involved with this method, especially in trying to combat targeted fake news stories aimed at specific, vulnerable demographics. But Facebook is indicating that this is only a part of the way it is developing a strategy toward battling fake news.
Social media has developed into a breeding ground for fake news articles. Now, the creators of those platforms are joining with others to use technology to combat a problem they unwittingly abetted.