France’s watered-down anti-hate speech law enters into force

by the URG team

On 1 July a new French law entered into force that aims to regulate online hate speech. Known as the ‘Avia law’ after Laetitia Avia, the parliamentarian who drafted the original bill, the final law was significantly watered down during its passage through the lower house of parliament and the Senate, following opposition from free speech activists. Then, in an important blow to President Macron’s hope of better regulating online hate speech, especially speech spread via social media, on 18 June the French Constitutional Council (Conseil constitutionnel) struck down the core provision of the law – namely, that online platforms must remove hate speech within 24 hours of notice or complaint. The Council found this provision to be a breach of the right to freedom of expression and opinion. The decision means the ‘Avia law’, already watered down, is now deprived of much of its substance.

The fate of the ‘Avia law’ reflects continued disagreements, both within and between UN member States, about how to deal with the very real problem of online hate speech. On the one hand, the victims of racist, anti-Semitic or Islamophobic expression, supported by member States of the Organisation of Islamic Cooperation (OIC), have long argued for greater control over speech that incites racial, religious or other forms of hatred and/or violence. They note that, given the power and reach of the Internet, such ‘hate speech’ can rapidly spread around the world – leading, in some cases, to terrible acts of violence such as that seen in 2019 in Christchurch, New Zealand. On the other hand, free speech activists and many Western States (especially the US) have urged caution, arguing that a very high threshold should be met before free expression is limited, and that ‘the best antidote to hate speech is more speech.’

Generally speaking, over the past ten years the balance in this debate has shifted towards those calling for greater control over hate speech spread online. This is partly due to mounting evidence of the serious societal damage done by online hate speech, and partly due to the rise of populist leaders like President Trump – who have shown that previous societal checks and balances around hate speech break down when senior representatives of the State engage in hate speech rather than speaking out against it. As part of this shift, Germany has brought in a law to control online hate speech, the EU has launched a number of initiatives with social media companies to ‘take down’ hateful content, and companies such as Facebook and Twitter have taken steps to remove hate speech from their platforms before it can go viral. The ‘Avia law’ was part of this important movement, and the striking down of its core provision is therefore significant.

Avia law

The ‘sages’ of the Constitutional Council took issue with the core provision of the Avia law, which sought to compel search engines and social media platforms to remove ‘manifestly illegal’ hateful content within 24 hours of it being notified to them, at the risk of steep fines of up to 4% of the company’s global revenue. They found that the ‘extremely short’ time limit within which companies had to make ‘very technical juridical decisions’ on content removal risked violating individuals’ freedom of expression in a manner that was ‘not necessary, adapted or proportionate’, notably by incentivising companies to remove content excessively. This decision goes in the same direction as a recommendation made by France’s national human rights institution (NHRI), which had further lamented the absence of preventative and educational measures from the legislation, as well as a request from the European Commission to postpone adoption of the law as contrary to European law, notably the free movement of services.

The Avia law was largely modelled on its German counterpart, the NetzDG law, which was equally contested but ultimately entered into full force in January 2018. Similarly, the NetzDG compels online platforms to take greater responsibility for removing illegal hate speech by fining them up to €50 million for failure to remove illegal content within 24 hours of it having been notified to them. It further requires companies to publish a report every six months detailing the content removed and the reasons for removal.

A study of the effects of this law, one year after its entry into force, suggests that detractors’ fears that it would incentivise an overly cautious approach leading to unnecessary takedowns were largely misplaced. Indeed, of the 992,039 messages on Facebook, YouTube and Twitter that were notified to the platforms as possibly unlawful, only 166,072 (around 17%) were taken down. Of the more than one thousand suits brought against platforms under the law, only one resulted in a fine, imposed on Facebook. The €2 million sanction was imposed for Facebook’s bad faith reporting of the number of hate speech notifications it had received, a shortfall found to be due to its opaque notification process. Indeed, in 2018 Facebook reported having received 2,752 complaints – a number dwarfed, and delegitimised, by Twitter’s report of 520,000 cases. While it is difficult to assess the effectiveness of such a law in increasing the removal of illegal speech, since it is conjecture whether platforms would have done so proactively under a softer, self-regulatory approach, the resulting improvement in the user-friendliness of Facebook’s notification process is undoubtedly a success for advocates of a more socially cohesive digital space.

So what of the watered-down French law, which failed to follow the path of its German sister law? Following the decision of the Constitutional Council, only minor provisions remain. These include the obligation, possibly inspired by the German Facebook ruling, for platforms to simplify their notification processes. The law also creates an independent observatory of online hate speech. Unfortunately, the preventative element of the law, which would have created a reporting obligation comparable to that of the German law, was also struck down de facto because it was linked to the sanction regime. In effect, the new law will have only minor implications for addressing the scourge of online hate speech through a regulatory approach. The ball is back in the platforms’ court, and it is now up to them to demonstrate that they can address the issue through progressive self-regulation. Fortunately, there are promising developments in this regard.


Featured image: ‘Members of the Constitutional Council in March 2019. Top row, left to right: Mr François Pillet, Mr Alain Juppé, Mr Jacques Mézard, Mr Michel Pinault. Bottom row, left to right: Ms Dominique Lottin, Ms Claire Bazy Malaurie, Mr Laurent Fabius (President), Ms Nicole Maestracci, Ms Corinne Luquiens.’ Available at: https://www.conseil-constitutionnel.fr/les-membres