The battle for social media regulation: can international human rights bridge the governance gap in the digital space?

by Daniela Kyle, Universal Rights Group NYC

On 4 June, Facebook announced that former US President Donald Trump’s suspension from its platform would last at least two years, following the implementation of new enforcement protocols. These protocols are expected to have long-term effects on the guidelines governing content moderation and account suspensions for public figures. On the same day, Nigeria announced a nationwide Twitter ban after the platform removed a tweet by Nigerian President Muhammadu Buhari. Nigeria claims that the ban is an attempt to control misinformation and the spread of hate speech; human rights activists around the world counter that it will only limit free speech and silence political opponents.

These two developments add to escalating tensions between Big Tech and governments around the world. While tools for regulation exist on both the private and governmental sides, tension arises over who makes the final call. Can human rights guidance aid complex decision-making by public and private actors?

Facebook bans Donald Trump for two years and changes policies on public figures

Facebook published a new policy guiding how it plans to monitor and penalise public figures. Account suspensions will now carry clearly defined timeframes based on the severity of the violation of Facebook’s terms, ranging from one month to a maximum of two years.

Facebook stated that ‘given the gravity of the circumstances that led to Trump’s suspension, we believe his actions constituted a severe violation of our rules which merit the highest penalty available under the new enforcement protocols. We are suspending his accounts for two years.’ Once reinstated, Trump will be subject to a set of rapidly escalating sanctions if he commits further violations.

Facebook also decided to stop exempting politicians from compliance with its community standards. Before 4 June, posts by public figures stayed up even if the content violated the site’s terms and conditions; at the time, Facebook argued that it was in the public’s interest not to interfere. However, following Trump’s use of his social media presence to incite the Capitol attack, Facebook revised this policy. Politicians’ accounts must now abide by the terms and conditions set out in Facebook’s community standards, the same as any other user.

Twitter also suspended Trump’s account following the events of 6 January. Jack Dorsey, Twitter’s chief executive, originally resisted pressure to ban Trump’s account, arguing that the platform was a place where world leaders could speak even if their views were heinous. Eventually, however, Twitter concluded that permanently suspending the account was in both the platform’s and the public’s best interest. Social media platforms are therefore reversing course on how they monitor the accounts of public figures, increasing accountability, oversight, and the application of their community standards.

These policy changes will have long-term implications for how politicians and global leaders engage with social media platforms. ‘This change will result in speech by world leaders being subject to more scrutiny,’ said David Kaye, a law professor and former United Nations Special Rapporteur on freedom of opinion and expression. ‘It will be painful for leaders who aren’t used to the scrutiny, and it will also lead to tensions.’ Indeed, Nanjala Nyabola, who has written extensively about technology in Africa, said that Twitter and Facebook’s decision to ban the US President has opened a new phase in what had been a ‘very fraught conversation’ internationally. Opponents of this enhanced scrutiny argue that these private actors exercise disproportionate power over online speech by censoring public figures. Ramifications of these rising tensions are already being observed in countries like Nigeria.

Nigeria bans Twitter indefinitely

On 2 June, Nigerian President Buhari posted a tweet threatening separatist efforts with ‘destruction and loss of lives’ similar to the violence of the Nigerian Civil War. Twitter determined that the tweet violated its ‘abusive behavior’ policy, removed the post, and suspended Buhari’s account for twelve hours. Soon after, the Nigerian government banned Twitter entirely, claiming that the platform was allowing ‘the spread of religious, racist, xenophobic and false messages’ that ‘could tear some countries apart’, and that the ban was therefore an attempt to stop the spread of misinformation that can lead to real-world violence.

However, the international community cannot ignore the fact that these ‘real-world violent consequences’ have included the mobilisation of Nigerian youth against oppressive agents of the government, including the #EndSARS anti-police-brutality protests. Dr Leena Koni Hoffmann, of the Africa Programme at Chatham House, said the government’s grouse with Twitter is also about the ‘democratising role of social media in mobilising and amplifying the voices of young Nigerians.’ In this context, the move to ban Twitter in Nigeria drew widespread condemnation from human rights groups and international actors, who say it will prove deleterious to democracy, curtailing Nigerians’ fundamental rights to freedom of expression, assembly, and access to information.

Trump also weighed in, perhaps unsurprisingly in support of the Nigerian government’s decision, stating that ‘more countries should ban Twitter and Facebook for not allowing free and open speech—all voices should be heard. In the meantime, competitors will emerge and take hold. Who are they to dictate good and evil if they themselves are evil?’ For Trump and Buhari, the decision of social media platforms to heavily regulate content damages democracy and violates human rights. Trump alluded to the rise of potential competitor platforms with less regulation, which he claims will allow for ‘freer’ speech and greater levels of representative democracy. However, the future of such platforms in countries cracking down on social media, such as Nigeria, is still unclear.

Solutions for the future of digital moderation 

The recent policy changes enacted by social media platforms to better regulate content produced by public figures show a stronger push by these companies to tackle incitement on their platforms, which has so often led to physical violence offline. Yet the tools for user regulation and content moderation remain in flux and controversial. Most social media platforms lack transparency about the standards that guide their rules of regulation: companies independently set their own standards and use their own discretion in enforcing them, with little or no appeals mechanism as part of the process. Government regulation has been proposed as a way to address this lack of accountability, yet social media platforms and their moderation tools can also be bent to political influence and manipulation, with serious and dangerous results.

There is a clear governance gap over the digital space, and often a communication gap between private and public actors contemplating the ‘best’ way to regulate social media. Many digital experts therefore point to existing international human rights standards to fill this gap and to provide an external source of standards insusceptible to private interests or political will. In addition to proving valuable in combating hate speech and incitement, applying human rights frameworks to content moderation could oblige companies to disclose more information about their rules and enforcement, and to provide more transparent and effective remedies to users. It could also help dispel claims of politicisation levelled at the platforms by disgruntled political figures.

Given the rudderless nature of present content moderation tools, and the inevitable political and private contestation over digital space, human rights frameworks can provide guidance on balancing the protection of freedom of expression against the enforcement of its boundaries, so as to protect others from violence, hate, and harassment. There are frameworks and treaties under which States have obligations in international law, which governments will recognise and from which social media companies can draw standards to inform their content moderation tools and decision-making processes. For example, business enterprises are expected to respect and comply with human rights standards such as those set out in the 2011 UN Guiding Principles on Business and Human Rights, including on transparency, due diligence, and remediation. Applying international human rights standards is long overdue; it is crucial to holding companies accountable for their human rights impact, and can support the transparent charting of the digital space by the private and public actors that bear responsibility for the lives affected by it.


Featured photo by @dole777, available on Unsplash
