Part 2: How to rein in Big Tech and cure ‘fake news’ without killing free speech

10 February 2021

Irini Katsirea - Reader in International Media Law, University of Sheffield

The term ‘fake news’ first rose to ubiquity in public discourse during the 2016 US presidential election and the Brexit referendum. Part 2 of this blog series explores how big tech can be held to account in relation to ‘fake news’.

Read Part 1 of this blog series

The response of Big Tech and free speech

Technology companies have responded to the challenge of the ‘infodemic’ by adopting a two-pronged strategy of promoting accurate information, and of flagging or removing false claims and conspiracy theories with the help of a network of certified third-party fact-checkers. Facebook, for example, places educational pop-ups from the WHO and national health authorities at the top of result pages, while removing conspiracy theories that have been ‘flagged by leading global health organizations and local health authorities that could cause harm to people who believe them’, including claims about vaccines. The rather broad focus is on claims that are ‘designed to discourage treatment or taking appropriate precautions’, as well as on ‘claims related to false cures and prevention methods… or claims that create confusion about health resources that are available’.

The promotion of accurate information in cooperation with the WHO and national health ministries is a promising way of countering misinformation. More caution is called for as regards the removal of false claims and conspiracy theories. Freedom of expression does not only protect truthful information, but may also extend to untruthful statements. It is important that individuals feel empowered to discuss their concerns about the spread of the disease and to criticise the response of public authorities, especially in view of the political uncertainty as to what an optimum response should entail. A telling example is YouTube’s short-lived ban of TalkRadio for allegedly breaching its policies on medical misinformation related to Covid. TalkRadio hosts public debates in which opinions critical of lockdown policies have regularly been expressed. YouTube reversed its ban within twelve hours amidst criticism of its policies.

Public health is one of the narrow grounds for the restriction of free speech. However, under the principle of proportionality it is imperative that there is a direct and immediate link between the expression and the alleged threat, and that the chosen method to restrict expression is necessary and proportionate (D. Kaye, ‘Disease pandemics and the freedom of opinion and expression’ (UN General Assembly, 23 April 2020)).

In the US context, the removal of false claims should be reserved for speech that is likely to incite imminent lawless action, as recognised in First Amendment doctrine. In cases that do not meet this threshold, the flagging of clearly inaccurate information is a more proportionate response than its outright removal. Still, the identification of such false information with the help of independent fact-checkers also bears risks. The characterisation of a fact-checking service as ‘independent’ is not cast in stone and can become a matter of contention. Facebook has cooperated with partisan fact-checkers in the past, and other platforms might tread the same path. Nor is it implausible to assume that erstwhile neutral fact-checkers could become subject to media capture.

Even greater risks are posed by the reliance on automated content moderation as well as automated appeal and review processes. Platforms have increasingly resorted to such automation recently, as the pandemic has depleted their moderation workforces. These intricate issues raise the question of whether social media platforms should be held accountable through a code of practice or legislation, and how the balance with freedom of speech should be struck.

Big Tech accountability and free speech

In recent years, there has been a reconsideration of the continued validity of the liability exemptions that have facilitated the platforms’ unbridled growth, especially as regards potentially dangerous or illegal content. In the EU, the revision of the rules concerning online platforms is discussed in the context of the Digital Services Act, which is intended to act ‘as a standard-setter at global level’, underpinned by ‘due-diligence obligations’ and backed by a ‘new national and European regulatory and supervisory function’. It reflects the perceived need to harmonise the existing unwieldy ‘patchwork of national measures’, such as the German Network Enforcement Act or the French Avia Law, and possibly to influence the forthcoming UK Online Harms bill. At the same time, the fact remains that platforms do not ordinarily curate content in the way editorial desks would. Social media platforms rely on their users to post or share content, and search engines depend on websites to make their offering accessible to them. Their editorial decision-making is mostly related to the organisation of content rather than its production. This difference needs to be taken into account when balancing increased accountability with the right to freedom of speech.

Blueprints for an effective solution

An effective solution would need to focus on the organisation of content rather than its production, building on the experience gained by audiovisual regulators under the Audiovisual Media Services Directive rules for video-sharing platforms. The organisation of content by social media platforms is a focal point of the German Interstate Media Treaty (Medienstaatsvertrag, MStV), which was adopted after lengthy negotiations.

The new Media Treaty is the first legislative attempt in Europe to regulate social media platforms’ algorithms for diversity and transparency. The extent to which the Treaty’s requirements will succeed in penetrating the opacity of algorithms, and in delivering information about the aggregation, selection and presentation of content that is intelligible, yet detailed enough to be useful, remains to be seen. It is important that such transparency obligations also extend to platforms’ moderation policies, including those on misinformation. The criteria on the basis of which platforms police users’ content, as well as their user notification and appeal procedures, need to be informed by human rights standards and subject to scrutiny.

Notably, the new Media Treaty also extends journalistic due diligence obligations to professional journalistic-editorial services that regularly contain news or political information, such as blogs of a non-private nature. Greater transparency for, and oversight over, platforms’ content policies, coupled with greater supervision of online news providers, are a starting point towards tackling the current information disorder. The findability of public interest content, as well as the provision of financial support for and partnership with trustworthy news media, also provides a powerful bastion against misinformation.

An earlier iteration of this blog post was originally published on the ILPC Spotlight Series blog.
