In many ways 2017 has been a year in which few “original” news events have happened; instead we have been living through the aftermath of 2016. Whether because of Brexit on this side of the pond or the Buffoon-in-Chief on the other, we’re still wading through the repercussions of what voters decided for us last year.
One of the main problems we’re still grappling with is online “fake news”, which we now know helped drive both the Brexit and Donald Trump campaigns to victory with alarming success. These were stories concocted to present plausible-sounding arguments, designed to spread wildly on social media and to sway the thousands of undecided voters who swung both the EU referendum and last year’s US Presidential election.
This fake news phenomenon was understood early on by agents of the Russian Government, who it now emerges used thousands of bots to amplify fake stories and give them undue prominence in people’s newsfeeds in the run-up to major votes. That amplification will have reinforced many people’s belief in these stories and helped them make up their minds about how to vote on false pretences. Russian interference in the US election is already under investigation by Special Counsel Robert Mueller, the former FBI Director, and there has been some great journalism by Carole Cadwalladr on Russia’s efforts to influence opinion on our shores.
Even though there have been several major elections in Europe this year, in France, Germany and of course here in the UK, we still haven’t been able to effectively identify and eradicate fake news.
As many as a quarter of posts shared in the run-up to the final round of the French Presidential Election were estimated to be fake news, which may be down to the fact that finalist Marine Le Pen was running on a far-right and quasi-Moscow-backed platform.
Fake news is still causing problems outside of election cycles, though. The most recent example was the story of a House of Commons vote last week, in which MPs voted down a part of the Brexit withdrawal bill that stated that animals have emotions and sentience. Being a nation of animal lovers, social media was awash with posts about the issue, with over 500,000 people signing petitions against the move. These stories were then picked up by major outlets such as The Independent and The Evening Standard.
The problem is that the story wasn’t true.
The debate was about animal rights in a post-Brexit UK, and MPs argued time and time again that animals are sentient and can feel pain. The vote certainly wasn’t the rejection of animal sentience that some of the stories made it out to be.
The problem with this fake news example, like many others, is that once this depth of feeling exists and opinions have formed on the issue, those attitudes are very hard to reverse. Government ministers have already denied that they voted against animal sentience and have said they’ll look at enshrining it in legislation in future, but that’s not the first impression millions of people will have of the issue.
And one of the reasons they’ll feel that way is that the story was shared not just by media outlets, but by friends and family on Twitter and Facebook.
People find it hard to trust the media in many instances now, and distrust of politicians has been with us through the ages.
What social media has brought to the table, though, is a kind of “social proof”. We trust the people we follow and the people we’re friends with, and if they say or post something we’re inclined to believe it’s true. We don’t necessarily question their sources the way we would a piece of journalism, and this means the information we get can sometimes be wrong, or in the worst case malicious.
That’s why modern fake news is so potent: it preys on our inherent trust in those around us and uses it to spread lies and rumours in a way that was never possible before.
Combating this is not easy, and it’s a task the major internet companies are wary of taking on. Facebook, Google and Twitter have made their millions and billions off a simple premise: showing users what they want to see. After years of building on that premise, they are now being forced into a more guiding role, shaping what users should see. It’s not a natural fit for them, and it requires a reimagining of their role within the internet. All three have published their findings on what happened in 2016 and taken steps to stop it happening again, but each has prefaced its efforts by saying it can’t fully stop fake news because of the free-flowing nature of its platform.
Some cry censorship over these changes, but they don’t change the availability of the material or the opinions voiced in it, only how easily it is reached. Tweaking a social network’s algorithms to show fewer fake news posts doesn’t mean those posts can’t be found, just that they won’t be pushed in front of millions of people. This is no different from what media organisations do at present: lending their newspapers, airwaves or websites to the stories they think relevant to their audiences. Assuming that Facebook, Google and Twitter aren’t part of the modern agenda-setting news complex would be a little naïve in 2017, so it’s time we treated them as such.
We need to be able to trust what we see on social media in the same way we trust newspapers if we are truly to use these platforms as a modern news medium. Whether that comes through automatic fact-checking of stories, source-attribution models or teams of journalists at the “big three” scouring through reports, something needs to be done. The best approach I can see is a simple reporting mechanism through which people can flag a story as fake and have it reviewed quickly for accuracy; it already works for reporting ads on online platforms, so I see no reason why it can’t work for news items.
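To make that idea concrete, here is a minimal sketch in Python of how such a flag-and-review mechanism might work. Everything in it is hypothetical: the threshold, the function names and the actions are illustrative assumptions, not anything Facebook, Google or Twitter has actually built.

```python
# A minimal, hypothetical sketch of a flag-and-review mechanism for news stories.
# Thresholds, names and actions are illustrative assumptions only.
from collections import defaultdict

REVIEW_THRESHOLD = 50  # hypothetical: number of user flags before a story is queued for review

flags = defaultdict(int)  # story_id -> number of "this looks fake" reports
review_queue = []         # story_ids waiting for a human or automated fact-check

def flag_story(story_id: str) -> None:
    """Record a user's report and queue the story once enough reports arrive."""
    flags[story_id] += 1
    if flags[story_id] == REVIEW_THRESHOLD:
        review_queue.append(story_id)

def apply_verdict(story_id: str, is_fake: bool) -> str:
    """Act on a reviewer's verdict: demote confirmed fakes in feeds rather than delete them."""
    return "demote_in_feed" if is_fake else "restore_normal_ranking"
```

The design choice worth noting is that a story judged fake is demoted rather than removed, which fits the point above about access versus availability: the material can still be found, it just isn’t handed a mass audience.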
There are no elections on the horizon in the UK for a while, but the US faces decisive midterms next November that could shape the second half of the Trump Presidency. That means there are only months left for media organisations, and the internet giants, to put real processes in place for dealing with fake news and to ensure that voters see genuine information in their newsfeeds. Democracy relies on voters making informed choices, and whether Russian-backed or not, fake news is a threat that needs to be taken seriously everywhere if we’re to keep our democracies healthy, free and fair.
If you’re interested in learning more about fake news in the 2016 US election, there’s a great article on the subject: “Social Media and Fake News in the 2016 Election” by Hunt Allcott and Matthew Gentzkow.