Advertisers and social media companies strike a deal to address harmful content

by Aurore Lentz, Universal Rights Group

Following months of negotiations, the Global Alliance for Responsible Media (GARM), a consortium bringing together major consumer goods brands and media platforms such as Facebook, Twitter, and Google, reached an agreement earlier this quarter to adopt a common framework on harmful content in the context of advertising. By defining sensitive or harmful content in a unified manner across the industry, the agreement makes it easier for advertisers to ensure their ads are not displayed near such content, thereby also allowing social media companies to maintain their carefully crafted consumer image. Since the agreement marks the first time that social media companies have adopted a common definition of harmful content, it is a ground-breaking development for online content regulation.

The agreement came after a major boycott of Facebook was launched in July 2020 as part of the #StopHateForProfit campaign, during which more than 1,200 businesses and NGOs united to hold social media companies accountable for harmful content on their platforms. Notably, these businesses and NGOs suspended their advertising on Facebook in an effort to pressure the social media giant into doing more to protect users and their rights. YouTube faced a similar advertiser boycott in February 2019, following allegations that pedophiles were using the platform for ‘soft-core pedophilia’.

Following the boycott, Facebook announced that several measures would be taken in line with Stop Hate for Profit’s demands. These included recruiting a civil rights expert, establishing a team tasked with studying racial biases in algorithms, and undergoing and publishing a civil rights audit that would formulate recommendations for better civil rights compliance. In a welcome demonstration of the sectoral ripple effect that can flow from self-regulatory responses to civil society-led demands, these steps appear to have prompted other social media companies to make progressive reforms: Twitter and YouTube, for instance, banned accounts associated with QAnon (a US-based conspiracy theory movement) and white supremacist channels.

GARM’s Brand Safety Floor and Suitability Framework

On 17 January 2020, GARM approved its Working Charter and pledged to develop cross-industry standards to improve both consumer and brand safety, notably by combating online hate speech, bullying, and disinformation, as well as by better protecting personal data. These efforts culminated in the adoption of the Brand Safety Floor and Suitability Framework on 23 September, which serves ‘to safeguard the potential of digital media by reducing the availability and monetisation of harmful content online.’1

This agreement is an important advance for corporate self-regulation since it commits some of the world’s largest companies to: (1) adopt, for the first time, a common set of definitions for harmful content in an effort to ensure cross-sectoral harmonisation; (2) develop universal reporting standards for harmful content in order to improve transparency; (3) undergo independent external auditing to monitor companies’ reporting, implementation, and safety operations with a view to ensuring mutual accountability; and (4) deploy tools for platforms to better anticipate and remove harmful content, as well as improve the management of ad placements.

However, the manner in which the agreement defines harmful content raises a number of questions. The agreement creates a risk typology, whereby harmful content is classified into eleven categories.2 For each category, the agreement defines content either as inappropriate, and therefore ineligible for advertising support from the brands that are part of GARM (i.e. the Brand Safety Floor), or as sensitive, requiring the explicit consent of advertisers before ads are placed alongside it (i.e. the Brand Suitability Framework). The latter further classifies content into three risk levels, which are based neither on international human rights law nor on US national legislation.

For example, in the category ‘Explicit violations/demeaning offenses of Human Rights,’ the Framework provides that content depicting ‘human trafficking, slavery, self-harm, animal cruelty, etc.’ does not meet the Brand Safety Floor and is therefore not appropriate for advertising support. Aside from the imprecise language used (‘demeaning offenses of human rights’ is neither legally accurate nor reflective of a correct understanding of human rights, since by definition all human rights abuses are demeaning), the inclusion of ‘animal cruelty’ as a human rights violation suggests that the Framework treats human rights as a vague moral concept.

This comes as no surprise considering that the content typology builds on the previous work of the American Association of Advertising Agencies (4A’s), as well as research conducted by US and UK-based advertiser associations (e.g. the Association of National Advertisers – ANA, and the Incorporated Society of British Advertisers – ISBA). The categorisation is therefore driven by focus-group research into what particular consumers find harmful, rather than by internationally agreed standards. Indeed, there is no trace of discussions between these trade bodies and human rights bodies, mechanisms, or civil society organisations, whose input would have greatly benefitted the exercise.

How effective is the Framework for combating harmful content online?

Although the framework responds to demands for greater brand safety, it falls short of meeting Stop Hate for Profit’s objective of ensuring greater safety for platform users. Indeed, one can argue that the Brand Safety Floor and Suitability Framework merely presents a unified set of definitions of potentially harmful content, with a view to informing platforms’ ad placement decisions and avoiding damage to advertisers’ brands.

Nevertheless, since advertising revenue underpins the business model of online platforms, these platforms are likely to take great care to ensure that ads are not placed adjacent to harmful content (although the notion of adjacency is yet to be defined), or to sensitive content more generally. As a result, there could be a significant decrease in such content on these platforms, which could feasibly improve the online environment. Furthermore, in order to improve compliance, the agreement could push platforms to better detect harmful and sensitive content, which could in turn facilitate takedowns of content that violates their terms of service. Finally, the harmonised reporting system for harmful content could also drive good behaviour through market competition.

Though the commitments made in the agreement are potentially significant, their impact depends on how the agreement is implemented. To date, social media platforms have made no announcements suggesting that they will tighten their safety standards in line with the Framework. While the agreement stipulates that social media platforms ‘will’ adapt monetisation policies and community standards, it does not clearly state what new community standards should be adopted, nor to what end. It is thus possible that social media platforms will simply ensure ads do not appear alongside harmful content, rather than remove such content from their platforms.

The uncertainty surrounding the effectiveness of the Framework is further exacerbated by the lack of information on the agreement’s key accountability provision. While the platforms’ commitment to undergo an audit by a third-party monitoring mechanism is a welcome measure to ensure compliance with the guidelines, little information is available about what this process would entail. One can hope that the existence of an audit process will compel social media platforms to adopt concrete measures, but the fact that ‘the goal is to have all major platforms audited for brand safety or have a plan in place for audits by year end’ does not provide strong assurances. Indeed, platforms could very well produce a ‘plan’ to undergo an audit that is never implemented. Moreover, given that the Framework will only be implemented in 2021, its potential impact on digital and human rights is difficult to determine.

There are also issues with the content categorisations used by the Framework, which are very broad and leave significant room for interpretation. For example, the ‘medium risk’ level is defined by a ‘dramatic depiction’ of harmful content, without specifying what ‘dramatic’ means. In addition, it is sometimes difficult to identify which harmful content is included in the typology. For instance, the ‘Online piracy’ category is defined as ‘Pirating, Copyright infringement, & Counterfeiting’, which is far from self-evident and would normally require a prior determination by a competent court. Finally, some types of harmful content are missing from the Framework, such as suicide and disinformation.

Risks for freedom of expression and the promotion of human rights

Although the Framework could very well be a first step toward tackling harmful content online, such as hate speech, its definition of such content is unclear. For instance, while the category ‘Obscenity and Profanity’ includes ‘Excessive use of profane language or gestures and other repulsive actions that shock, offend, or insult,’ these ‘other repulsive actions’ are not defined. Moreover, other categories included in the Framework could potentially threaten freedom of expression, access to educational content, and artistic expression. Indeed, since content deemed ‘inappropriate’ for advertising support would be made less accessible or simply taken down, the Framework could be used as a foundation to police content on social media platforms.

The Framework’s vague categorisation of inappropriate content may shrink the civic space online, to the detriment of human rights both online and offline. This risk is particularly evident with regard to the ‘Debated Sensitive Social Issue’ category, given that human rights defenders regularly use social media to promote and protect rights, and that major social movements such as #MeToo and #BlackLivesMatter emerged online. This category, which is defined as ‘high risk,’ includes ‘Depiction or discussion of debated social issues and related acts in negative or partisan context,’ despite the fact that ‘negative’ and ‘partisan’ are two very different concepts. Similarly, the ‘Death, Injury or Military Conflict’ category is defined as ‘high risk,’ even though depictions of death or injury can help expose injustices and fuel social movements, as was the case with George Floyd’s death at the hands of US police officers. Furthermore, ten of the eleven categories define ‘Breaking News or Op-Ed coverage’ as ‘medium risk,’ which constitutes a clear threat to freedom of expression, including to the work of journalists, activists, bloggers, NGOs, and think-tanks.

On the other hand, the ‘Adult & Explicit Sexual Content’ category, defined as ‘Explicit or gratuitous depiction of sexual acts, and/or display of genitals, real or animated,’ poses a threat to both artistic expression and access to educational content. Indeed, its classification of ‘Full or liberal Nudity’ as ‘high risk’ and ‘Artistic Nudity’ as ‘medium risk’ restricts artistic expression on social media and disregards the fact that images of genitalia can be used for education about sexuality and safety.

It remains unclear whether the Framework will effectively limit harmful content online, including hate speech. Indeed, there are still loopholes and blind spots in the set of definitions the Framework provides for the digital space. Moreover, if the Framework were implemented and ‘sensitive’ content were indeed removed from online platforms, this could also lead to the removal of content depicting and denouncing human rights violations. In this manner, the Framework could obstruct the work of human rights defenders, undermine social movements, and limit access to necessary information.


Featured image: https://pixabay.com/photos/media-social-media-apps-998990/


1. GARM: Brand Safety Floor and Suitability Framework

2. GARM: Brand Safety Floor and Suitability Framework

3. Facebook Ad Boycott Extends Beyond July: ‘Everyone Agrees Facebook Has Got To Change’

4. WFA and platforms make major progress to address harmful content

