On 14 May this year, San Francisco's Board of Supervisors voted to ban the use of facial recognition technology by local authorities and agencies, including the police. Several other US cities, and even some states, are now considering following suit. These developments come in the wake of a recent Georgetown University study, which found that the use of facial recognition technology by the FBI and local law enforcement agencies serves to entrench societal biases, largely because the technology tends to misidentify anyone who is not a young white male (the demographic on which the algorithms are predominantly trained). In August, for example, there was a major outcry in the majority-black city of Detroit when it was revealed that the police had been secretly using flawed facial recognition technology for two years. One resident labelled it ‘techno-racism.’
In another example, a 2016 ProPublica investigation into the use of algorithms by law enforcement agencies to predict recidivism (i.e. the likelihood of someone reoffending) found that black defendants were far more likely than white defendants to be incorrectly judged to be at high risk of reoffending, while white defendants were more likely than black defendants to be incorrectly flagged as low risk. Moreover, the discriminatory effects of AI are not limited to race: it recently came to light, for example, that an AI recruitment tool developed by Amazon was biased against women.
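In technical terms, the disparity ProPublica documented is a gap in error rates between groups: one group faces a higher false positive rate (wrongly flagged as high risk), the other a higher false negative rate (wrongly flagged as low risk). The short Python sketch below, which uses entirely hypothetical data and group names rather than ProPublica’s actual dataset or methodology, illustrates how such an audit tallies the two rates per group:

```python
from collections import defaultdict

# Entirely hypothetical records, for illustration only:
# (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False),
    ("group_a", True,  True),
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_b", False, True),
    ("group_b", False, True),
    ("group_b", True,  True),
    ("group_b", False, False),
]

def error_rates(records):
    """Per-group false positive rate (wrongly flagged high risk)
    and false negative rate (wrongly flagged low risk)."""
    fp, fn, neg, pos = (defaultdict(int) for _ in range(4))
    for group, predicted_high, reoffended in records:
        if reoffended:
            pos[group] += 1
            if not predicted_high:
                fn[group] += 1   # missed a real reoffender
        else:
            neg[group] += 1
            if predicted_high:
                fp[group] += 1   # wrongly flagged a non-reoffender
    groups = set(pos) | set(neg)
    return {g: (fp[g] / neg[g] if neg[g] else 0.0,
                fn[g] / pos[g] if pos[g] else 0.0) for g in groups}

for group, (fpr, fnr) in sorted(error_rates(records).items()):
    print(f"{group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

The key point is that comparing error rates across groups, rather than looking at a tool’s overall accuracy alone, is what reveals this kind of bias.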
On the other hand, newspapers are also replete with stories of how AI can help improve lives and protect human rights. For example, in 2018 the police in New Delhi trialled the use of facial recognition technology to reunite lost children with their families. The trial was a significant success: using the technology, the police were able to identify (and later reunite with their families) almost 3,000 missing children in just four days. In another example, in 2008 two Danish brothers launched the website REFUNITE to reconnect refugees with their families. The platform has reunited mothers and sons, sisters and brothers, nephews and aunts.
AI and machine learning (ML) are set to contribute to progress in many other areas of life too, including the discovery and development of new medicines; support for persons with disabilities through, for example, speech-to-text and image recognition and description; and improvements in the efficiency of renewable energy sources.
So what do these seemingly contradictory sets of stories tell us about technology and human rights? Are AI, big data and other emerging technologies good or bad for society and for human rights? The answer, of course, can be both. The key to ensuring that technology is used for good is for States to adopt regulatory frameworks that allow for the transparent, accountable and rights-respecting deployment (or non-deployment) of new technologies.
These emerging trends and questions have not gone unnoticed or unaddressed by the international human rights community. The relationship between new technologies and the enjoyment of human rights has become a key topic of debate at the UN Human Rights Council (Council) and across the wider UN human rights pillar. Technology and human rights is a central theme of OHCHR’s new ‘2018-2021 management plan,’ has been the focus of new initiatives at the Council during 2019, and was one of the main themes discussed at the 6th Glion Human Rights Dialogue (Glion VI) in May of this year.
As with developments in the ‘real world,’ the UN’s work in the field of technology and human rights seeks both to address the risks and to seize the opportunities offered by digital technology.
Regarding the former, in her opening statement to the 41st session of the Council in June, the High Commissioner for Human Rights, Michelle Bachelet, drew attention to the need ‘to address the human rights challenges raised by digital technology, as it transforms almost all sectors of every economy and society.’ The High Commissioner focused, in particular, on the actual and potential human rights implications of surveillance technology, spyware (which can be used to monitor political opponents), and State-sponsored cybercrime and warfare (which, she claimed, are causing a ‘digital arms race’). Council mechanisms are also increasingly engaged on this issue. For example, in June the Special Rapporteur on freedom of expression released a report on the threat of digital technology being used to undermine democratic elections, through network shutdowns, DDoS attacks and/or pervasive digital disinformation and propaganda campaigns.
Regarding the latter, different human rights actors have also noted the power of digital technology to catalyse and reinforce progress. For example, the Independent Expert on the enjoyment of all human rights by older persons has reported that robotics, artificial intelligence and assistive technologies offer significant avenues for the fulfilment of the rights to autonomy, self-determination, equality and non-discrimination, safety and physical integrity, and freedom of movement, as well as (more generally) to a life of dignity. Others have noted how satellite imagery and machine learning can facilitate the identification and monitoring of serious human rights violations, support grassroots mobilisation, increase democratic participation, and promote government transparency and accountability.
At the 41st session of the Council in June, Austria, Brazil, Denmark, Morocco, the Republic of Korea and Singapore secured the adoption of a new resolution on ‘New and emerging digital technologies and human rights.’ The resolution is premised on the recognition, as noted above, that the Council’s engagement on this issue must both help States mitigate the possible negative human rights consequences of technology, and support the positive application of technology to promote rights. The resolution also takes a holistic approach to the subject: rather than singling out any one technology, it addresses all new and emerging digital technologies.
The strategy set out in the resolution is two-fold: the UN will first seek to map the human rights implications of current and emerging digital technologies, and will then aim to develop a human rights-based approach (HRBA) to guide States in regulating the deployment of those technologies. The resolution and this strategy are a promising indication that the Council is aware of its important role and responsibilities in this field, and understands how best to deliver on them. Quite simply, if digital technologies are to be rolled out around the world in a manner that respects and promotes human rights, rather than undermining or violating them, then the Council must be centrally involved. States and other stakeholders must work together to understand the nature of the relationship between technology and human rights, elaborate the human rights normative framework as it pertains to technology, and then help States put in place regulatory regimes that ensure technology is deployed in a rights-based and rights-consistent manner.
Feature picture: Artificial Intelligence & AI & Machine Learning, by Mike MacKenzie, licensed under CC BY-ND 2.0