First private sector ‘Treaty Body’ launched by Facebook

by Marc Limon, Executive Director of the Universal Rights Group

Yesterday, beneath the radar of most diplomats at the UN, Facebook launched what is, in effect, a global first: a private sector-led human rights ‘Treaty Body’ designed to monitor the company’s compliance with international human rights standards. Specifically, the tech giant’s new ‘Oversight Board’ will review Facebook’s decisions about what content to ‘take down’ (because, for example, it constitutes ‘hate speech’ or online harassment, or endangers safety or privacy) and issue final, transparent, unappealable decisions. Commentators have long noted that Facebook is a ‘private sector entity [that seeks] the credibility and institutional legitimacy of a global public institution.’[1] It is therefore perhaps unsurprising that the company would be the first to challenge a status quo in which human rights oversight has been the sole purview of public entities, whether national authorities or international mechanisms. Yet it is still a remarkable development, and potentially revolutionary in its implications.

Following a six-month public consultation (itself modelled on ‘the standards and best practices of any public consultation undertaken by a public entity’) designed to help Facebook ‘establish institutional legitimacy in a governance initiative,’ the company concluded that the Board would need to exercise independent judgement and to be diverse.[2]

On the first point, this meant that the Board’s judgements should not be influenced by Facebook, governments or third parties. To this end, Facebook established a US$130 million trust, entirely separate from the company, to fund the Board, run its operations, employ its staff and hold the contracts of its members.

On the second point, the public consultations revealed the need for the Board’s composition ‘to reflect the diversity of the social media platform’s global users.’ Facebook therefore sought to identify ‘experts hailing from different backgrounds and disciplines, with a range of different viewpoints,’ people with different ‘perspectives, cultural and linguistic knowledge, professional experience’ and people of different ‘age, race and gender.’

On 6 May, the first 20 members of the Oversight Board were announced (the final Board will comprise 40 members, making it significantly larger than any UN human rights Treaty Body). They come from different cultural and religious backgrounds, and include experts in freedom of expression, technology and democracy, the rule of law, journalism, child safety, and human rights protection. They include former UN Special Rapporteur Maina Kiai, former Inter-American Special Rapporteur Catalina Botero, former US State Department lawyer Evelyn Aswad, and heads of international NGOs such as Julie Owono of the Paris-based Internet Sans Frontières. All have publicly expressed their commitment to freedom of expression within the framework of international human rights norms, and have committed, like Treaty Body members, to take decisions independently on the basis of those principles in order to hold Facebook to account, without regard to the economic, political or reputational interests of the company.

What will the new Board do?

Social media affects people’s lives in many ways, good and bad. Today, in the middle of a global pandemic, social media has become a lifeline for many people, providing valuable information and helping families and communities stay connected. At the same time, social media can also be used ‘to spread speech that is hateful, harmful and deceitful.’[3]

How to deal with such expression has long been a source of debate and disagreement at the UN. For example, in the context of its resolutions on ‘Combating religious intolerance’ and in meetings of the ‘Istanbul Process,’ States (especially Western and Muslim-majority States) have long argued over whether, to what extent and how governments should seek to protect individuals from religiously-motivated hate speech (e.g. anti-Semitic or Islamophobic expression), and over the implications of this for freedom of speech (indeed, such debates go back decades, to the time of the Commission on Human Rights and its resolutions on ‘Defamation of religions’). The importance and urgency of these debates has been amplified over the past five years by the growing power and reach of social media. Today, as was seen with such tragic consequences in Christchurch in 2019, hateful content posted in one country can quickly spread around the world, leading to serious human rights violations in other regions and States.

This ‘globalisation of hate speech’, and the fact that content is shared via private sector social media platforms, have made the problem difficult for governments and international organisations to regulate. For their part, social media platforms have long claimed that they could not act because, in short, they merely provided platforms for users to exchange information and could not be held responsible for the content those users posted.

All this has started to change over the past twelve months. First, governments and international organisations (e.g. France, Germany, the European Commission) have begun to take steps to regulate and prevent the spread of hate speech by establishing public-private partnerships (with social media companies) premised on the rapid ‘take down’ of hateful or dangerous content. Second, social media companies themselves, led by Facebook, have begun to shift towards a more interventionist attitude to human rights concerns linked to their platforms, including hate speech, misinformation, harassment and ‘fake news.’

Many of these new policies and approaches were shared during 2019 at an inter-sessional meeting of the Istanbul Process organised by Denmark, the EU and the Universal Rights Group (URG) in Geneva, and later in the year at the seventh meeting of the Istanbul Process organised by the Netherlands and URG in The Hague. Those meetings revealed a central challenge for all the new policies and approaches: who should decide whether certain content constitutes ‘hate speech’ and should therefore be ‘taken down’; and how to do this at speed, before the content can spread around the world, while still protecting legitimate free speech. A key conclusion from the debates in Geneva and The Hague was that social media platforms are part of the problem but are also essential to the solution. Only by working with companies like Facebook and Twitter can States effectively protect individual rights-holders from harmful or hateful content shared online.

The launch of Facebook’s Oversight Board is the most recent, and perhaps the most important, example of a coalescing of opinion around this point. It reflects a recognition, on the part of Facebook, that it has a clear responsibility to prevent its platform from being used to violate the rights of its users or of others in society, as well as a recognition that, although it has the power to take down harmful or hateful content, it would be dangerous for it to wield such power alone, without adequate checks and balances to protect freedom of expression.

A private sector ‘Treaty Body’?

The new Board will oversee Facebook’s compliance with international human rights norms in a number of areas that are particularly challenging for the company and for the wider social media sector, including hate speech, harassment, and the protection of individual safety and privacy. There are further echoes of the prerogatives and role of UN Treaty Bodies: the Board has been created by Facebook but, thanks to the trust, remains independent of it (just as Treaty Bodies are established by States but work independently of them); its members have been selected following an open public process of nomination, and have committed to work independently of Facebook; and members will serve fixed terms of three years, for up to a maximum of three terms (one more than is the case with Treaty Bodies), and cannot be removed by Facebook.

Indeed, in one way in particular, Facebook’s Oversight Board is more powerful than the UN human rights Treaty Bodies: the decisions it reaches are final and unappealable, and must be fully implemented. Facebook’s Chief Executive, Mark Zuckerberg, has committed to carrying out the Board’s decisions and recommendations even where the company disagrees with them, unless doing so would violate the law. All such decisions and recommendations will be made public, and Facebook must respond publicly to them.

This power has led the outgoing Special Rapporteur on freedom of expression, David Kaye, to go further than a mere comparison between the new Board and a UN Treaty Body. In a recent article about Facebook and its new oversight body he observed (somewhat provocatively, but not without justification): ‘Massive influence? Check. Legislative power? Check. Executive power? Check. Now one of them has a court.’[4]

To add a further layer of interest to this revolutionary moment for human rights, the new private sector body may well end up passing judgement (via its oversight of Facebook’s take-down decisions) on the words of important State leaders. For example, during the current COVID-19 pandemic, Twitter has taken down tweets posted by President Bolsonaro because it decided they were contrary to public health guidance, while Facebook has taken down event pages promoting the anti-lockdown protests encouraged by President Trump because they violated social distancing rules.

How did we get here?

However one feels about this particular revolution (my view is that whatever the challenges thrown up, this moment was inevitable and, indeed, private sector leadership is the only way to protect individual rights-holders from online hate speech and ‘fake news’), the establishment of the new Oversight Board raises as many questions as it answers.

Just a few of these were posed in the above-mentioned article by David Kaye: how can just a few companies ‘come to so dominate public speech?’; how can such companies cause ‘so much friction with governments and the public, and then, as Chinmayi Arun describes so well, [simply] create their own mechanisms of self-regulation?’; ‘where is government oversight promoting and protecting democratic principles?’; and ‘why should private companies be making, and then overseeing, the decisions that have such impact on public life?’

In short: ‘how did we get here?’

Featured image: Creative Commons, found at pxhere.com


[1] https://www.weforum.org/agenda/2020/05/key-lessons-from-the-creation-of-facebook-s-new-oversight-board/

[2] https://www.weforum.org/agenda/2020/05/key-lessons-from-the-creation-of-facebook-s-new-oversight-board/

[3] https://www.nytimes.com/2020/05/06/opinion/facebook-oversight-board.html

[4] https://www.justsecurity.org/70035/the-republic-of-facebook/
