Incitement and insurrection in the US underscore need for universal norms on hate speech and disinformation in the digital age

by Marc Limon, Executive Director of the Universal Rights Group, and Amanda Gu, Universal Rights Group NYC

The attack on the United States Capitol on 6 January was live-streamed on every major internet platform, an act of violence recorded and organised online. Digital technology companies quickly suspended related accounts, adding to a global debate on the parameters of freedom of expression in the digital realm. Given the gravity of the situation – the fact that social media was used in the attempted overthrow of one of the world’s most stable democracies – negotiating the parameters of freedom of expression in the digital age should be brought to the top of the international human rights agenda.

The impact of digital technology on the creation and distribution of information cannot be overstated: it is where 55% of Americans get their news. Prior to the election, Facebook and Twitter attempted to temper misinformation by putting up fact checks and flagging posts with misinformation labels. Facebook pledged to introduce a new oversight board in hopes of creating an international human rights approach to content moderation. After the 6 January insurrection, former President Donald Trump was banned from Twitter, Facebook, and Instagram indefinitely, while Reddit, Twitch, Shopify, YouTube, Snapchat, TikTok, and Discord all started removing far-right affiliated accounts and hashtags used to incite violence. Google and Apple even decided to suspend Parler, a far-right social networking app that promoted extremism and conspiracy theories, from their stores, while Amazon withdrew the app’s web hosting service. Recent surveys show that these steps have led to a 73% drop in misinformation about election fraud.

All this is a far cry from the long-standing US position on freedom of expression – namely that the best antidote to harmful expression (e.g., ‘hate speech’) or misinformation is more information, and that a very high threshold (e.g., ‘incitement to imminent violence’) must be reached before any controls on free speech are considered.

The limits of free speech

Article 19 of the International Covenant on Civil and Political Rights (ICCPR) makes clear that ‘everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers.’ The last part of article 19 however, circumscribes this right by asserting that its exercise ‘carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary [inter alia] for the protection of national security or of public order or of public health or morals.’

Article 20 of the Covenant goes further, containing a positive obligation upon States to prohibit, by law, ‘any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.’

These provisions may seem clear enough; however, they have been the source of longstanding debate and disagreement at the UN between those States (like the US) that have favoured a very high threshold for permissible prohibitions (asserting, essentially, that only ‘incitement to imminent violence’ can be prohibited in law), and others (especially members of the Organisation of Islamic Cooperation, OIC) who have advocated a far more interventionist position, pointing to the ability of even low-intensity ‘hate speech’ or incitement to pose a significant threat to the rights of minorities, and significant challenges to society. European States have traditionally aligned themselves more closely with the US position, though that stance has shifted significantly since the 2005–2006 ‘Danish cartoons’ controversy.

Shifting sands

The rise of social media has caused a seismic shift in these long-held positions.

Countries of the EU were the first to begin to understand that the speed with which disinformation and incitement to hatred or violence can spread in the digital age has changed the ‘rules of the game’ when it comes to determining what kinds of speech should be permitted and what should not. For example, in 2017 Germany enacted a new law, the Network Enforcement Act (NetzDG), to combat agitation and fake news. This inspired attempts to pass a similar law in France (the so-called ‘Avia law’). Beyond hard law remedies, the European Commission entered into talks with the social media giants to construct a framework for the rapid ‘take down’ of harmful posts.

Until recently, the US Government had lagged far behind this evolving European position. It was the social media giants themselves (in particular Twitter and Facebook) that began to move away from their earlier – and increasingly uncomfortable – insistence that they were simply platforms, and should not be held responsible for what individuals and organisations choose to post. Until the drastic steps taken against President Trump following the 6 January insurrection, their focus had been on increased monitoring (via algorithms and human intervention) and take down of dangerous disinformation (e.g., about the COVID-19 pandemic) or hate speech (e.g., targeting Black Lives Matter protesters), or ‘tagging’ offending posts with factual rebuttals (a new system Twitter used repeatedly to rebuff then-President Trump’s baseless allegations of election fraud). They also began to develop community standards for content moderation, and regularly release reports on their content moderation processes. Facebook even created an independent oversight board to arbitrate disputes over its decision-making on content moderation (especially important considering that context matters in determining whether and under what circumstances speech can harm – meaning mistakes can be made).

All this has led some conservative commentators to assert that the social media companies have violated former President Trump’s rights to freedom of opinion and freedom of speech. Surprisingly (considering they have been willing to go much further than the US in regulating speech online), earlier this month such commentators were joined by German Chancellor Angela Merkel and the EU’s Commissioner for the Internal Market, Thierry Breton, who stated that ‘the fact that a CEO can pull the plug on POTUS’s loudspeaker without any checks and balances is perplexing.’ (To be fair to Chancellor Merkel, she did position her comments in the context of a call for the US to take such matters out of the hands of CEOs by following Germany’s lead and passing stricter laws against hate speech.)

While there is a debate to be had about proportionality (e.g., how long Trump should be banned from Twitter or Facebook), much of this recent (mainly conservative) commentary has been misguided, wrong, and confused.

First, there is no free speech argument in existence that suggests incitement to lawlessness, hatred and violence is protected speech. Quite the contrary. The nineteenth-century free speech proponent John Stuart Mill argued that the sole reason one’s liberty may be interfered with (including restrictions on free speech) is ‘self-protection’ – in other words, to protect people from harm or violence. Indeed, as noted above, according to article 20 of the ICCPR, States are obliged to prohibit, by law, ‘any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.’

To suggest that taking action against speech that incites violence amounts to ‘censoring’ the speaker is completely misleading.

Second, there is no free speech argument that guarantees any citizen the right to express their views on a specific platform. It is ludicrous to suggest there is. If this ‘right’ were to exist, for example, it would mean any citizen could demand to have their opinions aired on the front page of the New York Times and, if refused, claim their free speech had been violated.

What does exist is a general right to express oneself in public discourse, relatively free from regulation, so long as one’s speech does not harm others. Donald Trump still possesses this right.

Squaring the circle: public-private partnership

It is clear from the events of recent months that the effective regulation of hate speech and disinformation online, in a manner that fully respects and protects legitimate free expression, is both an incredibly important and incredibly complex challenge.

While much ire has been directed at social media companies over recent years, they have at least taken steps to try to square this circle (no matter how imperfect people may believe those steps to be). They have also recognised the difficult questions that their policies and decisions pose for human rights, including freedom of expression, the right to freely elect one’s government, and security of person. For example, Twitter CEO Jack Dorsey recently acknowledged that the decision to block President Trump ‘sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.’

What is needed is for States to help fill the current void in direction and leadership by taking steps at national and international level to take such difficult and momentous decisions out of the sole hands of company CEOs. At a minimum, this means governments providing a clear policy framework (e.g., guidelines) within which social media companies can take content decisions in a manner that is fully consistent with human rights law. Or, as advocated by Chancellor Merkel, it could mean setting necessary guardrails in law – which would help ensure consistency of application across the social media sector, and promote compliance (through enforcement).

To date, much of the political discourse around this challenge has centred on using antitrust measures against the social media giants, based on an assumption that greater competition would give people more choice over the kinds of online conversations they want to join, while also reducing the impact of being ‘de-platformed.’ The US Federal Government, for example, has already taken steps to challenge Big Tech in the courts (e.g., over Facebook’s ownership of Instagram and WhatsApp).

Nevertheless, there are signs that governments are willing to go further. In addition to the German and French laws mentioned above, further steps are on the cards in both Europe and the US. Regarding the former, the EU’s Digital Services Act would require big tech companies to do more to combat hate speech. In the US, meanwhile, politicians from both parties have argued for limiting the legal protections that online companies enjoy under Section 230 of the Communications Decency Act.

Another (and perhaps more effective) approach would be for governments and social media companies to work together through public-private partnership. It is clear that States cannot possibly ‘police’ billions of social media posts every day. Only the social media companies themselves can hope to do that (and even then, it is incredibly challenging). But what States can and should do is set the broad parameters (the above-mentioned ‘guardrails’) for action by social media companies.

Such a ‘norm-setting’ exercise should first be undertaken at UN-level through inter-State deliberations and consultations with digital technology companies. In the borderless digital world, it clearly makes no sense for States to develop a web of separate and possibly inconsistent rulebooks. States can then develop and implement national policies or laws consistent with those universal norms at national level – providing a clearer delineation of the rules that the platforms need to apply to policing speech, along with more consistent enforcement.

Featured photo: Jason Howie, Social Media apps on iPhone, on Flickr.
