The Human Rights Council and the wider UN human rights system have regularly considered the human rights implications of new technologies (e.g. resolution 20/8 on the ‘Promotion, protection and enjoyment of human rights on the Internet’). In recent years, that interest has intensified.
The most recent Council text on the subject – resolution 41/11 on ‘New and emerging digital technologies and human rights,’ adopted in June 2019 – pursued three objectives: (1) to consider the positive as well as the negative implications of technologies for human rights – i.e. how these technologies can be used to promote and protect human rights, as well as potentially harm them; (2) to adopt a holistic approach by looking at a broad range of new technologies; and (3) to promote a multi-stakeholder approach involving ‘governments, the private sector, international organisations, civil society, the technical and academic communities.’
The pursuit of these three broad objectives meant the resolution covered a great deal of ground – including the positive and negative implications of digital technology for equality and non-discrimination, and the importance of putting such technology ‘at the service’ of economic, social and cultural rights. For example, in resolution 41/11 the Council recognises: ‘that digital technologies have the potential to facilitate efforts to accelerate human progress, to promote and protect human rights and fundamental freedoms, to bridge digital divides, to support, inter alia, the enjoyment of the rights of persons with disabilities, the advancement of gender equality and the empowerment of all women and girls, and to ensure that no one is left behind in the achievement of the Sustainable Development Goals.’
The High Commissioner for Human Rights, Michelle Bachelet, has also spoken of the ‘enormous’ benefits of digital technology ‘for human rights and development.’ In a speech at the Japan Society in New York in late 2019, she outlined some of those benefits: ‘we can connect and communicate around the globe as never before; we can empower, inform and investigate; we can use encrypted communications, satellite imagery and data streams to directly defend and promote human rights; and we can even use artificial intelligence to predict and head off human rights violations.’ However, in line with resolution 41/11 and the Secretary-General’s many interventions on this subject, she also warned that digital technology may, whether accidentally or deliberately, be used to undermine or violate human rights: ‘The digital revolution is a major global human rights issue,’ she said. ‘Its unquestionable benefits do not cancel out its unmistakable risks.’
This last point – that the negative impacts of digital technology on human rights can occur unintentionally – is an important one. Indeed, most technology-related human rights abuses probably fall into this category. As Bachelet noted in her speech, these abuses ‘are not the result of a desire to control or manipulate, but [are rather] by-products of a legitimate drive for efficiency and progress.’ For example, algorithms designed to make social security systems more efficient (and thereby support economic and social rights) may end up exacerbating inequalities. As she further observed, ‘digital systems and artificial intelligence create centres of power, and unregulated centres of power always pose risks – including to human rights.’
We already know what some of these risks look like in practice: recruitment programmes that systematically downgrade women; systems that classify black suspects as more likely to reoffend; or predictive policing programmes that lead to over-policing of poor or minority-populated areas. The people most heavily affected are likely to be those at the margins of society. Only a human rights approach – one that views people as individual rights holders, empowers them, and creates a legal and institutional environment in which they can enforce their rights and seek redress for any violations and abuses – can adequately address these challenges.
‘To respect these rights in our rapidly evolving world,’ concluded the High Commissioner, ‘we must ensure that the digital revolution is serving the people, and not the other way around. We must ensure that every machine-driven process or artificial intelligence system complies with cornerstone principles such as transparency, fairness, accountability, oversight and redress.’
Putting technology at the service of economic and social rights, and the SDGs
One of the ways in which digital technology is supposedly being mobilised to support human rights is through the ‘digitalisation’ of social security systems. This example also provides an instructive case study in how such schemes, though conceived to improve efficiency and cost-effectiveness, may end up violating rights and diminishing human dignity.
At the heart of this case study lies a simple set of questions. Can machine learning replace the experience, intuition and judgement of human beings at the point of delivery? Can artificial intelligence effectively and compassionately judge which families need what kind of help most urgently? Can, in short, algorithms be relied upon to respect, promote and protect human rights without discrimination?
To help answer these questions, in September 2018 the Guardian newspaper surveyed a number of local councils in the UK (in their capacity as social service providers), each of which was pioneering a new ‘predictive analytics’ system to identify families and children in need of interventions to prevent child abuse. As well as raising data privacy concerns, the investigation heard that the new systems ‘inevitably incorporate the biases of their designers, and risk perpetuating stereotyping and discrimination while effectively operating without any public scrutiny.’
These concerns were echoed a year later in an article by Ed Pilkington entitled ‘Digital dystopia: how algorithms punish the poor.’ The article focused on a quiet ‘revolution’ around the world in ‘how governments treat the poor.’ ‘Only mathematicians and computer scientists fully understand this sea-change,’ he said, ‘powered as it is by artificial intelligence, predictive algorithms, risk modelling and biometrics.’ And yet, ‘if you are one of the millions of vulnerable people at the receiving end of this radical reshaping’ of the manner in which States promote and protect economic and social rights, ‘you know it is real and that its consequences can be serious – even deadly.’
The article explained how access to unemployment benefits, child support, housing and food subsidies, and much more, is being digitised and automated. ‘Vast sums are being spent by governments across the industrialised and developing worlds on automating poverty and, in the process, turning the needs of vulnerable citizens into numbers, replacing the judgment of human caseworkers with the cold, bloodless decision-making of machines.’ The American political scientist Virginia Eubanks has called this the creation of a ‘digital poorhouse.’
In Illinois (US), for example, algorithms have been used to recalculate welfare payments. Those who have received too much (in some cases over periods of more than 30 years) have been automatically instructed to pay it back. Similar cases have been reported in Australia, where vulnerable and marginalised individuals have been ordered to repay social security benefits because of a ‘flawed algorithm.’ In Newcastle in the UK, ‘where millions of pounds are being spent developing a new generation of welfare robots to replace humans,’ claimants have spoken of a climate of ‘fear’ and ‘panic’ as social security benefits are changed without warning, without explanation and without remedy. These three examples alone are said to have affected millions of people – with the poorest and most vulnerable paying the highest price. Similarly, in India, technical problems with the country’s ‘Aadhaar’ system – a 12-digit unique identification number linked to people’s biometric data – such as failures to recognise people’s thumbprints, have in some cases resulted in destitution, starvation and suicide.
In each of these cases, digital technology solutions (affecting social services, unemployment benefits, disability allowances and health coverage) have been rolled out with minimal public consultation and minimal parliamentary debate.
Opening the eyes of world governments
These serious threats and challenges to economic and social rights, and to the equality and non-discrimination principles that underpin international human rights law, are belatedly being considered at the UN. In late 2019, Philip Alston, the then UN Special Rapporteur on extreme poverty, presented a report to the UN General Assembly in which he warned that the world is ‘stumbling zombie-like into a digital welfare dystopia.’ ‘All too often,’ he said, ‘the real motives’ behind the digitalisation of the welfare state ‘are to slash spending, set up intrusive government surveillance systems, and generate profits for private corporate interests.’
‘The process is commonly referred to as “digital transformation” by governments and the tech consultancies that advise them,’ he said, ‘but this somewhat neutral term should not be permitted to conceal the revolutionary, politically-driven character of many such innovations.’ ‘Systems of social protection and assistance are increasingly driven by digital data and technologies that are used for diverse purposes, including to automate, predict, identify, surveil, detect, target and punish.’
The international community ‘has thus far done a very poor job of persuading industry, government, or seemingly society at large, of the fact that a technologically-driven future will be disastrous if it is not guided by respect for human rights and grounded in hard law.’
As the world faces severe economic contractions, increased unemployment and growing poverty as a result of the COVID-19 pandemic, the question of whether the international community, and most importantly national governments, can do a better job in the future is one – quite literally – of life and death.