Covid-19, ‘fake news’ and the future of platform regulation

3 February 2021

Irini Katsirea - Reader in International Media Law, University of Sheffield

The term ‘fake news’ first rose to ubiquity in public discourse during the 2016 US presidential election and the Brexit referendum. Four years on, vaccine hoaxes and other conspiracy theories related to the current pandemic are still rampant on social media. What is the past and future of the elusive term ‘fake news’? Is self-regulation by social media companies an effective and human rights-compliant solution? And how can these companies be held to account?

Part 1 of this two-part series examines Covid-19 and the not-so-new phenomenon of ‘fake news’.

From ‘Lügenpresse’ to ‘fake news’

The term ‘fake news’ is notoriously vague and highly politicised. On the one hand, it has been used to describe foreign interference in elections and referendums, sparking fears over the threat posed to democracy. On the other hand, it has been employed by the US President, but also by nationalist, far-right parties such as the German Alternative for Germany (AfD), for political advantage. The Trump administration and the nationalist parties who lambast the mainstream media in their tweets, election campaigns and demonstrations join a long tradition of vilifying the press.

In the First World War, the notion of ‘Lügenpresse’ was enlisted in the effort to discredit reporting by the enemy. Before the Nazi Party’s seizure of power, the concept was weaponised against the ‘unpatriotic’ press of the Weimar Republic, which was accused of failing to stand up to the humiliating Versailles Treaty; later it was turned against foreign media, not least by the chief Nazi propagandist Joseph Goebbels. It is against this backdrop of historic and recent abuse of the term ‘fake news’ for political ends that the Digital, Culture, Media and Sport (DCMS) Committee recommended that the term ‘fake news’ be rejected, and that an agreed definition of the terms ‘misinformation’ and ‘disinformation’ be put forward. In response to this recommendation, the Government distinguished between disinformation as the ‘deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain’ and misinformation as the ‘inadvertent sharing of false information’.

The distinction between these two types of information challenge draws on Wardle and Derakhshan’s typology of ‘information disorder’. It seeks to separate inaccurate content on the basis of the disseminating agent’s motivation. Indeed, intent to deceive is key when attempting to draw a line between calculated falsehoods and legitimate forms of political expression such as ‘news satire’, which ordinarily aims to mock, not to deceive. This distinction is pertinent in so far as the methods used to tackle different forms of untruthful expression may need to vary depending on the motivation of the actors involved.

Media literacy is a long-term solution to misinformation, while blocking financial incentives is a possible remedy against the spread of disinformation. However, both disinformation and misinformation can pose similar risks. In the context of the current pandemic, a report by the European External Action Service concluded that misinformation was ‘the more pressing challenge’ for public health. While the terms ‘misinformation’ and ‘disinformation’ are less politically loaded and more amenable to definition than ‘fake news’, it should be borne in mind that ‘fake news’ is likely here to stay as ‘part of the vernacular that helps people express their frustration with the media environment’. The problem of ‘fake news’ rose to renewed prominence in the context of the Covid-19 pandemic.

Curiouser and curiouser: From the Covid-19/5G nexus to microchip-carrying vaccines

Since the start of the pandemic, there has been evidence of state and non-state actors spreading false stories about the origins and spread of the disease; its symptoms, diagnosis and treatment; its financial and societal impact; as well as the measures taken to contain it, including vaccination.

In February 2020, the World Health Organisation raised the alarm about the emergence of a so-called ‘infodemic’ as a result of the circulation of misleading information about Covid-19. Among the key false stories that gained considerable traction on social media were the supposed link between Covid-19 and the 5G network, and advice promoting self-medication, including with toxic chemicals. The former allegation inspired numerous arson attacks on 5G infrastructure across Europe, while the latter led to hundreds of deaths in Iran and multiple instances of poisoning in other parts of the world. The recent roll-out of Covid-19 vaccines in the UK and the US has led to a new wave of inaccurate claims, ranging from narratives about microchips contained in the vaccines, to allegations about ‘disappearing needles’ and unproven claims about dangerous effects of the vaccines.

Many of these false theories have multiplied on social media. Platforms such as Facebook and Twitter have ramped up their efforts to flatten the ‘infodemic’ curve. The question remains to what extent such measures can stand up to free speech scrutiny, including as regards their effectiveness. This question will be addressed in the second part of this blog post.
