‘The stakes couldn’t be higher’: social media, disinformation, and the survival of democracy

by Danica Damplo, Universal Rights Group NYC

On 11 June, United States Presidential candidate and former Vice President Joe Biden accused Facebook, in a tweet, of failing to enact any real reforms to combat disinformation on its platform. His campaign released an open letter for people to sign, emphasising the impact that disinformation – spread on Facebook – could have on the coming 2020 presidential election in November: ‘In 2016, we saw what happens when social media platforms allow disinformation to run rampant. It places the integrity of our elections at risk.’

Attacks on the integrity of democratic elections are not only a problem in countries in conflict or in nascent democracies. In established democracies like the UK and the US, the interplay of populism and technology, coming against a backdrop of outdated election laws and mechanisms, has led to a rise in misinformation, ‘fake news’ and hate speech, especially online, and often targeted at certain voter groups. These problems can become compounded and amplified through social media platforms like Facebook and Twitter, including during the COVID-19 pandemic.

So what are Facebook and Twitter doing today to combat disinformation and hate speech, and why is it so important, especially now, to get it right?

Twitter

In 2018, Twitter claimed that it had suspended millions of fake accounts to counter disinformation, and in March of this year it vowed to use new tactics to ‘keep pace with disinformation and influence operations targeting the 2020 election.’ In February, Twitter introduced a label for media that may be fake or manipulated, and in March it developed a strategy to address tweets on COVID-19 that contradicted authoritative guidance and could pose a risk to public health. In early May, Twitter announced that it would expand this strategy beyond COVID-19, placing labels and warnings on potentially harmful and misleading tweets.

As The New York Times reports, when Twitter placed fact-checking notices on US President Donald Trump’s inaccurate tweets on voter fraud, the President condemned Twitter and rushed to sign an executive order making it easier for federal regulators to claim that companies are restricting free speech. Twitter, seemingly undeterred, recently placed a warning label on a widely condemned tweet by President Trump inciting violence, which evoked for many Americans the brutal repression tactics used against African Americans during the 1960s civil rights movement.

On 12 June Twitter disclosed ‘networks of state-linked information operations’ that Twitter Safety has removed. This new information could be valuable in identifying and combatting State-led disinformation strategies. This concern about States interfering, through social media, with the social fabric and electoral independence of other States, has been of particular interest since the publication of the Mueller report on the ‘sweeping and systematic’ plot by the Russian government to interfere in the 2016 US elections.

While these are positive steps, Twitter has still faced criticism for not doing enough. In October 2019, then Presidential candidate US Senator Kamala Harris called for Trump to be banned from Twitter, an idea that has gained momentum in the wake of tweets that many argue would have prompted the expulsion of an ordinary Twitter user. Twitter has nonetheless refused to ban Trump from the platform or remove his tweets completely.

Facebook

Facebook has come under intense pressure for failing to take any action in response to the same posts by Trump that Twitter flagged, when these posts have appeared on Facebook. According to The New York Times, Mark Zuckerberg has personally condemned the rhetoric used in the President’s posts, but stressed his company’s commitment to free speech, arguing that ‘the posts were different from those that threaten violence because they were about the use of ‘state force,’ which is currently allowed’ (though, according to The Verge, this is a policy Facebook says it will re-examine).

This is one month after Facebook appeared to make progress in terms of combatting hate speech and incitement on its platform. On 12 May, the Guardian reported that Facebook found ‘a sharp increase in the number of posts it removed for promoting violence and hate speech across its apps.’ Facebook reported having removed 4.7 million posts linked to hate groups in the first quarter of this year, up from 1.6 million in the previous quarter, as well as 9.6 million posts containing hate speech, compared with 5.7 million in the fourth quarter of 2019.

At the start of May Facebook also launched what is, in effect, a global first: a private sector-led human rights ‘Treaty Body’ designed to monitor its own compliance with international human rights standards. Specifically, the tech giant’s new ‘Oversight Board’ will review Facebook’s decisions about what content to ‘take down’ (because, for example, it constitutes ‘hate speech’ or online harassment, or endangers safety or privacy) and issue final, transparent, unappealable decisions.

Many are naturally asking, where is Facebook’s shiny new Oversight Board? According to a tweet from the Board at the end of May, it was ‘working hard to set the board up to begin operating later this year.’

COVID-19 results in an ‘infodemic’ – and a rise in social media usage

Back in May, URG Director Marc Limon took a closer look at what the World Health Organisation (WHO) termed an ‘infodemic’ of mostly false information about the COVID-19 pandemic, and how this might affect policies and views on ‘fake news.’ On the subject of information about COVID-19, a Gallup/Knight Foundation poll found that, when asked about the most damaging sources of ‘fake news,’ majorities of Americans identified social media (68%) and the Trump administration (54%), while 45% named mainstream national media as the first or second most common source of COVID-19 misinformation.

These patterns fell heavily along political lines, with Democrats citing the Trump administration as the most damaging source of ‘fake news,’ and Republicans citing the mainstream national news media. Furthermore, Mr. Limon has observed that ‘the reliance of populist politicians on ‘fake news’ – deliberate misinformation – has gone from being a useful (if immoral) tool in normal times, designed to distil complex political questions into simple ‘the People against the establishment’ narratives, to being a serious handicap during a global health emergency.’

While Twitter and Facebook have stepped up efforts to combat the spread of COVID-19 related disinformation, NPR noted that, ‘experts say the Internet has gotten only more flooded since 2016 with bad information,’ and further pointed out the inherent danger that, ‘four years after Russia’s expansive influence operation, which touched the feeds of more than 100 million users on Facebook alone, Americans’ usage of social media has only increased — and drastically so, as a result of the pandemic.’

What can be done?

While governments have a role to play in combatting disinformation, at present the US President and his administration, far from clearing up misinformation, appear to be intent on adding to it (see this compendium on coronavirus alone), while challenging the objectivity of mainstream media and journalists and using social media to communicate directly with supporters. Furthermore, this administration and the Republican party have generally sought to limit or block all government efforts to protect US elections from outside interference, including via disinformation.

In a democracy, in which citizens decide for themselves who will best represent them and their interests, information is critical. Mainstream media, social media platforms, governments, and other institutions all play a role in how citizens are able to access that information. The manipulation of information seriously affects the extent to which elections are free and fair, and how credible they are perceived to be. Democratic governments and institutions need to take this seriously, but so too do the private companies that have become part of the media and information landscape. Former Vice-President Biden tweeted: ‘With less than 150 days until Election Day, the stakes couldn’t be higher. We’ve got to fix Facebook to protect our democracy and ensure fair elections.’

Mr. Limon has pointed out that the longstanding US position on freedom of expression has been that the best antidote to hate speech or false information is more information, with a high threshold given for what is considered incitement. But according to the Knight/Gallup poll, a policy of ‘more is more’ would appear to be out of touch with what Americans believe, as ‘relatively few (14%) think social media companies should leave the posts up without checking whether it contains misinformation.’

While Twitter, and arguably to a lesser extent Facebook, are taking steps, it is not clear that these meet the urgency of the moment. According to The Washington Post, Mr. Zuckerberg and other top Facebook executives spoke with civil rights leaders on 1 June, but the call only worsened tensions, with Color of Change President Rashad Robinson saying: ‘What was clear coming out of that meeting is Mark has no real understanding of the history or current impact of voter suppression, racism or discrimination. He lives in a bubble, and he defended every decision that he’s made.’

COVID-19, and the protests against systemic and structural racism and police brutality that have emerged in all fifty states (and across the globe), have in many ways moved up the timeline for action by companies like Facebook and Twitter. It will be critical for social media companies – alongside, and ideally in collaboration with, NGOs, international organisations, States and other actors – to meaningfully acknowledge the role of these platforms in shaping perceptions and even decisions, and to develop workable solutions in a manner that recognises the importance of freedom of expression while responding to the seriousness of the challenge posed to democracy by online hate speech and disinformation.


Featured Image: Originally appeared in ‘How social media has shaped the U.S. Presidential Election’ by Steve Blakeman.
