Trump, Facebook, democracy and rights: ‘how to handle free speech in an age of information chaos’

by Marc Limon, Executive Director of the Universal Rights Group

Writing in the Guardian on 5 May, Alan Rusbridger, a former newspaper editor and now member of the Facebook Oversight Board, set out in stark terms the dilemma facing the Board as it reviewed the social media giant’s decision, last January, to ban then President Donald Trump from its platform.

‘On 6 January, he was the president of the United States: probably the most powerful man in the world. He should be free to speak his mind, and voters should be free to listen. But he was also a habitual liar who, by the end of his term, had edged into repudiating the very democracy that had elevated him.’

That was, in short, the problem that Donald Trump embodied. On the one hand, he, like all Americans, enjoyed the right to freedom of expression and opinion, and the US electorate had the right to hear him. On the other hand, ever since his defeat to Joe Biden in November, Trump had regularly used his unique platform as President to spread disinformation about the election (‘stop the steal’) and to incite hatred (and, it would turn out, violence) against his political opponents.

Notwithstanding the terrible consequences of Trump’s lies and incitement – an attack on the US Capitol by an angry mob – it still came as a surprise to many observers and human rights experts when Facebook first removed two of his posts, and then (along with Twitter) banned him entirely. Under America’s traditional – and, in my view, extremist – interpretation of free speech, the threshold for banning expression is extremely high. It is very nearly a case of ‘anything goes.’ Instead of restricting speech, according to generations of US politicians and lawyers, democracies should rely on societal checks and balances – in short, ‘good speech will drown out bad speech.’ Within this broader picture, political speech should, it was believed, be particularly protected – so voters know who their leaders are, and can hold them to account for what they say and promise.

Yet, on 7 January, Facebook took its decision – a decision with significant implications not only for US democracy (Trump’s voice has largely been silenced without access to popular social media platforms), but also for politicians (especially populist politicians) around the world (would they too be banned if they repeatedly lied or spread disinformation?) and international human rights law and policy. On the last point, although the US’ extreme position on freedom of speech has increasingly become an outlier at the UN, it has nonetheless continued to exert a strong gravitational pull on international human rights law (soft and hard) as well as on the views and positions of relevant UN mechanisms (e.g., the Special Rapporteur on freedom of expression) and free speech NGOs (e.g., Article 19).

Are US attitudes shifting?

Because of this gravitational pull, any shift in US attitudes towards free speech, and acceptable limitations thereto, has major implications for the UN Human Rights Council and the wider international community. So, is America shifting its position?

There are a number of reasons to believe it is.

A first is – simply – the unprecedented nature of Facebook’s and Twitter’s decisions to ban President Trump.

American social media platforms have long presented themselves as venues for unfettered free expression. A decade ago, Twitter employees used to brand the start-up as ‘the free speech wing of the free speech party.’ In late 2019, Mark Zuckerberg gave an address defending Facebook’s allegiance to First Amendment principles—including his oft-stated belief that platforms should not be ‘the arbiters of truth.’

Yet, as Evelyn Douek, a doctoral student at Harvard Law School and an affiliate at the Berkman Klein Center for Internet and Society, recently explained in an interview with Wired magazine, the increasingly interventionist policies of the social media giants during the 2020 presidential election (e.g., warning labels on many of Trump’s posts), adopted as they tried to balance the right to free expression against the democratic integrity of the American republic, together with their actions to counter malicious disinformation during the COVID-19 pandemic, have made a mockery of those early claims.

The pandemic and the election have, according to Douek, ‘exposed the hollowness of social media platforms’ claim to American-style free speech absolutism.’ It is time, she continued, to recognise the fact that ‘the First Amendment-inflected approach to online speech governance that dominated the early internet no longer holds. Instead, platforms are now firmly in the business of balancing societal interests.’

Rather, she believes, the US, including its social media companies, is moving in the direction of many of the country’s international partners – towards what she calls a ‘proportionality approach.’ This approach starts with the proposition that speech should be protected, but then recognises that in any functioning democracy there are certain circumstances in which speech needs to be restricted, although those restrictions should be carefully circumscribed according to strict criteria. Those criteria might include, according to Douek: ‘what’s the reason that you’re restricting that speech? It can’t just be ‘I don’t like it,’ it has to be some sort of compelling interest. And then the question has to be, are you restricting it as little as possible? […] And, furthermore, the way in which you restrict it has to be actually effective at achieving the aim that you’re saying you restricted it for.’ But, if those criteria are met (for example, where a Tweet that spreads dangerous disinformation about COVID-19 treatment is ‘tagged’ by Twitter), then speech can be restricted.

A second is that senior US politicians (at least Democratic ones), including President Joe Biden, also seem to be shifting towards a more interventionist or ‘proportionalist’ approach. A good example comes in the shape of former Secretary of State and presidential candidate Hillary Clinton. In a recent interview with the Guardian newspaper (and perhaps with an eye on UK Prime Minister Boris Johnson as much as on Trump), Clinton warned of the danger to democracy of lies flourishing online, and called for ‘big tech’s wings [to] be clipped.’

No doubt partly spurred by the fact that her own bid for the White House was – perhaps fatally – undermined by ‘a tidal wave of fabricated news and false conspiracy theories,’ much of it generated by hostile (and non-democratic) third countries, Clinton has called for a ‘global reckoning’ with disinformation. This must include, she says, a ‘reining in [of] the power of big tech.’

The breakdown of a shared truth, she argues, ‘and the divisiveness that surely follows, poses a danger to democracy at a moment when China is selling the conceit that autocracy works.’

There must therefore be an ‘American reckoning, [indeed] a global reckoning, with […] disinformation,’ and with the ‘monopolistic power and control, [and] lack of accountability, that the platforms currently enjoy.’

The threat to democracy from the ‘alternative realities’ spread via social media is particularly acute, according to Clinton, because ‘China, a fast-rising power, promotes an alternative model to the world. Biden has suggested more than once that future generations will analyse this era and judge whether autocracy or democracy was more successful.’ He is, she believes, correct.

According to Clinton, there is little doubt that the Chinese are busy, internationally, ‘making the […] case that democracy is messy, things take too long, people are in and out of office, there’s no continuity, you can’t have the kind of fixed goals that can be moved forward in a socially cohesive way.’ China is therefore saying to politicians around the world ‘choose us’ – and our model of ‘controlled democracy’ (otherwise known as autocracy).

Facebook’s decision

The shift towards a ‘proportionality approach’ to free speech in a democracy is also evident in the Facebook Oversight Board’s verdict on whether to uphold or reverse Donald Trump’s indefinite ban. On 5 May the Board published its verdict on Facebook’s original decision: Facebook was both right and wrong. Right to remove his 6 January words and right, the following day, to ban the president from the platform. But wrong to ban him indefinitely.

The key word here is ‘indefinitely’ – if only because Facebook’s own policies do not appear to permit it. The Oversight Board judgment doesn’t mince its words: ‘In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.’ According to Rusbridger, the ball is ‘squarely back in Facebook’s court.’

What Facebook has to do now, according to the Board (a ruling the company is bound to implement), is re-examine the arbitrary penalty it imposed on 7 January. It should take account of the gravity of the violation and the prospect of future harm.

A huge moment for the UN human rights system

Diplomats at the UN Human Rights Council crave relevance and impact. In the (momentous) case of Facebook’s (and Twitter’s) decision to first censure and then ban Trump (and thus help defend US democracy while protecting legitimate free expression), the Council and the wider UN human rights system have undoubtedly secured both.

In reaching its decision, at the same time as looking at Facebook’s own values, content policies and community standards, and reviewing a submission sent on behalf of Trump himself, the Oversight Board – crucially – also sought to apply an international human rights lens in trying to balance freedom of expression with possible harms.

In the Trump case, it looked at the UN Guiding Principles on Business and Human Rights, which establish a voluntary framework for the human rights responsibilities of private businesses. It also considered the right to freedom of expression as set out in articles 19 and 20 of the International Covenant on Civil and Political Rights – as well as the qualifying articles to do with the rights to life, security of person, non-discrimination, participation in public affairs and so on.

Importantly, the Board also considered the 2013 Rabat Plan of Action, which sets out a threshold test for identifying when expression amounts to incitement to hatred.

Moreover…

In addition to its ruling on the original and ‘indefinite’ bans, the Board also sent Facebook a number of policy advisory statements. One of these concentrated on the question of how social media platforms should deal with ‘influential users’ (such as political leaders).

Commenting on this policy advisory statement, Rusbridger said: ‘Speed is clearly of the essence where potentially harmful speech is involved. While it’s important to protect the rights of people to hear political speech, if the head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a determinate period sufficient to protect against imminent harm.’

The Board also called upon Facebook to carry out a comprehensive review of its potential contribution to the narrative around electoral fraud and to the exacerbated tensions that culminated in the violence of 6 January.

The advisory statement makes clear that ‘this should be an open reflection on the design and policy choices that Facebook has made that may enable its platform to be abused.’ As Rusbridger has noted, this is a ‘not-so-coded reference to what is shorthanded as ‘The Algorithm.’’

Social media, he continued, is still in its infancy. ‘Among the many thorny issues we periodically discuss as a Board is, what is this thing we’re regulating? The existing language – ‘platform,’ ‘publisher,’ ‘public square’ – doesn’t adequately describe these new entities. Most of the suggested forms of more interventionist regulation stub their toes on the sheer novelty of this infant space for the unprecedented mass exchange of views.’ In the end, Rusbridger concludes:

The OSB is also taking its first steps. The Trump judgment cannot possibly satisfy everyone. But this 38-page text is, I hope, a serious contribution to thinking about how to handle free speech in an age of information chaos.

 
