Making AI trustworthy: the EU’s proposed legal framework for regulating artificial intelligence

by Courtney Halverson, URG NYC

Artificial Intelligence (AI) has a plethora of uses, ranging from surprising and beneficial applications, like applying the same technology used to analyse pastries to identify cancer cells, to potentially detrimental and intrusive applications, like using facial recognition to track citizens. The European Union’s new proposal for a legal framework to govern AI suggests that the introduction of ethical, human-centred regulations can protect human rights and privacy while also incentivising the technological development of AI.

As with the EU’s adoption of the General Data Protection Regulation (GDPR) in 2018, which created firm rules on data privacy and security, the EU is once more taking the lead in regulating emerging technologies. The AI legal framework is a 108-page document outlining a risk-based assessment of emerging and existing AI systems. It is likely to be debated by the European Parliament until at least 2023 before being voted on by EU member states. The proposal aims to create definitions and regulations that will remain trustworthy and ‘future-proof’ as technology rapidly advances.

Defining AI systems by the risks posed to rights

To create the necessary regulatory framework, the EU has proposed sorting AI systems into four categories: minimal risk, limited risk, high risk and unacceptable risk. The proposal contains the following ‘future-proof’ definition of artificial intelligence: AI is software that is developed with one or more specified techniques and approaches (including machine learning and deep learning) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with.

AI technology that is considered minimal risk will not be regulated and can therefore be used freely. Minimal-risk systems, which currently cover most AI systems, include things like AI-enabled video games and spam filters for email. AI systems in the limited-risk category are subject to transparency obligations: users must be notified when they are interacting with a machine, for example, so that they can make informed decisions.

High-risk systems are organised by sector rather than by specific examples. AI systems are considered high-risk if they are used in critical infrastructure such as transportation, in education and employment, in essential private and public services, in law enforcement, in migration, or in the administration of justice and democratic processes, to give some examples. Generally, each of these sectors is categorised as high-risk because of the potential to violate people’s fundamental rights as protected by the EU Charter of Fundamental Rights.

High-risk AI systems will be subject to a variety of strict rules before they can enter the market. Systems considered high-risk will be required to have adequate risk assessment and mitigation systems, high-quality datasets to minimise discriminatory outcomes, activity logging to ensure traceable results, detailed documentation so that authorities can assess the system’s compliance, clear information for the user, appropriate human oversight to minimise risk, and a high level of robustness, security and accuracy.

Lastly, unacceptable risks are those considered a clear threat to the safety, livelihoods and rights of people; such systems are deemed too dangerous to operate in society in any form and will therefore be banned. For example, systems used to manipulate users’ free will and systems that allow ‘social scoring’ have been deemed unacceptable risks.
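To make the tiered approach concrete, here is a minimal sketch, in Python, of how the four risk categories and some example systems mentioned above might be modelled. The category names follow the proposal; the example systems, the EXAMPLE_CLASSIFICATIONS mapping and the is_banned helper are purely illustrative assumptions, not part of the draft regulation, and a real assessment would follow the sectoral criteria set out in the proposal itself.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative model of the proposal's four risk categories."""
    MINIMAL = "minimal risk"            # e.g. spam filters, AI-enabled video games: left unregulated
    LIMITED = "limited risk"            # transparency duties, e.g. disclosing that the user faces a machine
    HIGH = "high risk"                  # strict obligations before market entry (logging, oversight, documentation)
    UNACCEPTABLE = "unacceptable risk"  # banned outright, e.g. social scoring, manipulation of free will


# Hypothetical example classifications drawn from the article's illustrations;
# a real determination would follow the sectoral rules of the proposal.
EXAMPLE_CLASSIFICATIONS = {
    "email spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "real-time social scoring system": RiskTier.UNACCEPTABLE,
}


def is_banned(system: str) -> bool:
    """Return True if the (hypothetical) system falls into the prohibited tier."""
    return EXAMPLE_CLASSIFICATIONS.get(system) is RiskTier.UNACCEPTABLE


if __name__ == "__main__":
    for name, tier in EXAMPLE_CLASSIFICATIONS.items():
        suffix = " (prohibited)" if is_banned(name) else ""
        print(f"{name}: {tier.value}{suffix}")
```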

One case in focus: facial recognition

In the proposed legal framework, facial recognition technology straddles multiple risk categories depending on its application. Facial recognition is designated high-risk due to the potential for discrimination, but social scoring, which uses facial recognition technology in real time, is categorised as an unacceptable risk due to its numerous violations of fundamental rights. The proposed framework states that facial recognition, and specifically real-time biometric identification, should never be allowed in public spaces except in cases such as searching for missing children, preventing terrorist threats, or locating and prosecuting suspects of serious criminal offences.

Fears about real-time biometric identification stem from the idea of the surveillance state and the example of China’s system of social scoring, a ‘moral ranking system’ created to assess Chinese citizens, companies and government organisations. Currently, the system is voluntary and does not span the entire country, but the plan entails each person having a unique code used to measure their social credit score in real time.

Throughout the process of drafting the proposal, dozens of civil society groups and digital rights activists urged the European Commission to fully close the existing loopholes on facial recognition. The groups raised concerns that prohibiting real-time remote biometric identification systems only in publicly accessible spaces does not protect citizens from the use of such systems by private actors, or from the use of systems outside public spaces by public authorities. Because the proposed limits on its use are so narrow, there is still room for the development of facial recognition systems that violate fundamental rights.

The proposal clearly centres the EU Charter of Fundamental Rights, using the term ‘fundamental rights’ eighty times throughout the document. The risk categories were defined by the potential to violate fundamental rights, now or in the future. Yet anticipating every potential human rights violation may be impossible, as AI is known to recreate the conscious and unconscious biases of the coders and programmers who build it. This may change as systems grow more capable, but currently AI is only as good as the data it is fed and the people who create it. While the proposal seeks to protect human rights, some violations may still stem from the inequitable practices that humans themselves continue to construct.

The global implications

If the proposal passes, the EU regulations would affect both AI systems that are produced and/or implemented in the EU and AI systems that affect EU citizens, no matter where the manufacturer is based. The framework could therefore also impact foreign tech companies seeking to operate within the EU. Fines for companies that do not comply could reach up to 6 percent of a company’s global sales.

Despite playing host to many pioneering technology companies, the US is not at present expected to emulate these steps towards a legal framework for AI; rather, the Biden administration is expected to maintain the decentralised strategy the US has traditionally used to manage technological risks. Nonetheless, the Federal Trade Commission (FTC) recently cautioned against artificial intelligence systems that use racially biased algorithms to select candidates for employment, housing, insurance and other benefits. Additionally, states such as Massachusetts and cities such as Oakland and San Francisco, California, and Portland, Oregon, have already taken steps to limit police use of facial recognition technology, despite the fact that there are currently no national standards in place.

Yet if the EU does become the first to tackle AI regulation, it is likely that its policies will serve as guidance for other countries’ AI policies. The Australian government has passed a law pushing Google and Facebook to pay publishers for their news. Britain is in the process of creating a tech regulator to oversee the industry more broadly. A reckoning is underway in how people interact with technology and social media, and the EU’s new AI legal framework is the first attempt to regulate one aspect of this ever-evolving space. It is possible that many other countries, or regional bodies such as the African Union or the Organization of American States, will consider taking a similar approach to the EU, regulating AI in a way that both fosters innovation and protects the human rights of citizens.

Provisions for the regulation of AI are also included in the UN’s Roadmap for Digital Cooperation, which notes that AI can be used to create digital public goods such as predictive insights for crisis management. Yet the roadmap also identifies challenges, such as a lack of inclusiveness and representation in global AI discussions, the absence of a common platform to govern AI, and the need to better understand how AI could be used to support the Sustainable Development Goals (SDGs). Hopefully, the EU’s first and decisive step towards the regulation of AI, including its emphasis on building definitions and regulations around benefits or risks to human rights, will prove a useful first framework for fostering trustworthy technological progress.


Featured image: EU flag transposed with technological wiring. Klaus Ohlenschläger | Getty Images
