Regulatory superstructure proposed for artificial intelligence
Isaac Asimov condensed his framework for regulating robots into three succinct laws. First, a robot may not injure a human or, through inaction, allow a human to come to harm. Second, a robot must obey orders given by humans, except where they conflict with the first law. Third, a robot must protect its own existence so long as that does not conflict with the first or second law.[1]
The proposals from the EU Commission are less digestible, running to nearly 100 pages and creating a regulatory superstructure for artificial intelligence. We consider the scope of this proposed Regulation on Artificial Intelligence and the specific new rules for high-risk use cases, “deepfakes” and public surveillance.
Please note, this article was updated on 26 April 2021 to reflect the final proposals from the EU Commission. The original version of this article considered the leaked version of the Regulation which was broadly similar but contained a number of differences.
Overview and summary
Crafting a suitable regulatory framework for new technology, such as artificial intelligence, is challenging.
Artificial intelligence is used in many different ways. It can be embedded within a product, provided as a service, or used internally within an organisation. The technology might be a general purpose tool or a model trained for a specific task. There are also a large number of potential actors in the deployment of artificial intelligence, including the persons supplying the software, providing the data, training the model, and selling and using the final system.
Artificial intelligence is also an emerging technology and there is little agreement as to what is, and is not, genuine artificial intelligence. While this technology has cracked some hard domain-specific problems (such as face recognition or language translation), it shows little sign of genuine intelligence or of replicating the flexibility of the human mind. Skynet and HAL 9000 remain firmly in the realm of science fiction. Regulating this type of emerging technology risks constraining innovation and might simply be unnecessary.
The EU Commission appears to have had these factors in mind. While the overall scope of the proposed Regulation is broad, the strongest obligations apply to a tightly defined class of “high-risk” artificial intelligence. Similarly, there has been a decision to focus the obligations on the “provider” of the artificial intelligence system, being the person placing it on the market or putting it into service, though there are also direct obligations for “users” of those systems.
What is AI?
The key definition setting the scope of the Regulation is of an “artificial intelligence system”. The Regulation recognises this is a "fast evolving family of technologies" and earlier leaked drafts of the Regulation acknowledged the difficulty in creating a clear and comprehensive description, given “there is no universally agreed definition of artificial intelligence”.
The solution is to define artificial intelligence by reference to three programming techniques, namely:
- machine learning;
- logic and knowledge-based approaches; and
- statistical approaches.
This avoids the problems with traditional high-level functional definitions, such as ‘systems that think like humans, act like humans, think rationally, or act rationally’, which are highly contested and more a question of philosophy than law.
However, this approach is still vague and potentially very broad. In particular, defining artificial intelligence to include “logic…based approaches, including…knowledge bases…and expert systems” potentially captures a very broad class of computer programs, few of which could sensibly be called ‘intelligent’.
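To illustrate how wide that net could be cast, the sketch below (purely illustrative, with invented rules and thresholds) shows a toy rule-based ‘expert system’ for triaging loan applications. It is nothing more than a handful of hand-written if/then rules, yet it is arguably a ‘logic and knowledge-based approach’ within the literal wording of the definition.

```python
# Purely illustrative: a toy rule-based "expert system" for triaging loan
# applications. All rules and thresholds are invented for the example.

# A tiny hand-curated "knowledge base" of rules, applied in order.
RULES = [
    (lambda a: a["income"] < 20_000,        "decline: income below threshold"),
    (lambda a: a["existing_defaults"] > 0,  "decline: prior defaults"),
    (lambda a: a["loan"] > 5 * a["income"], "refer: loan large relative to income"),
]

def assess(application: dict) -> str:
    """Return the outcome of the first rule that fires, else approve."""
    for condition, outcome in RULES:
        if condition(application):
            return outcome
    return "approve"

if __name__ == "__main__":
    print(assess({"income": 35_000, "existing_defaults": 0, "loan": 200_000}))
    # -> "refer: loan large relative to income"
```

Nothing in this program learns from data or reasons beyond the rules it was given, yet it touches creditworthiness, one of the “high-risk” use cases discussed below.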
Unacceptable risk – The Zeroth law?
The broad definition of artificial intelligence is accompanied by a set of general prohibitions for unacceptable uses of this technology. The Regulation forbids the use of artificial intelligence systems that:
- use subliminal techniques to manipulate humans in a manner that causes or is likely to cause "physical or psychological harm";
- exploit information about a human to target their vulnerabilities or special circumstances and materially distort their behaviour in a manner that causes or is likely to cause “physical or psychological harm”;
- assess the trustworthiness of individuals where that use is by a public authority; and
- use real-time remote biometric identification (e.g. CCTV with facial recognition) in public spaces for law enforcement, save in limited cases.
The first two prohibitions are laudable and difficult to argue with in principle, but they suffer from a lack of clarity and require the term “physical or psychological harm” to do a lot of heavy lifting. For example, what constitutes “psychological harm” and is it subject to any form of materiality qualification? Does it include doomscrolling, buying new gadgets you didn’t really need, body image issues, and so on?
However, in practice, given this is limited to subliminal techniques and exploitation of vulnerabilities, it is somewhat narrower than Asimov’s Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”.
High risk – What is “high-risk” AI?
While the general prohibitions apply broadly, the majority of the Regulation focuses on “high-risk” artificial intelligence, which is much more tightly defined.
It includes situations that raise product safety risks. The use of artificial intelligence will be “high risk” where it is used as a safety component of a product (or is itself a product) such as machinery, lifts, medical devices, radio equipment and even toys. It also includes the use of artificial intelligence in the management of critical infrastructure such as roads or utility networks.
Use cases that endanger fundamental rights will also be “high risk”. The Regulation identifies as “high risk” the use of artificial intelligence for:
- remote biometric identification, e.g. CCTV-based facial recognition;
- recruitment and assessment of employees, educational selection and performance assessment, assessing creditworthiness and assessing eligibility for public benefits; and
- various criminal justice, migration, emergency service and judicial purposes.
Finally, the EU Commission has the power to designate new forms of “high-risk” artificial intelligence, based on specific criteria. This approach of only applying strict regulation to a tightly defined class of “high-risk” uses, while allowing incremental expansion over time, appears sensible.
Who must comply with the rules on “high-risk” AI?
The Regulation applies to:
- “providers” of artificial intelligence systems. This means the person who places the system on the market under its own name or puts it into service under its own name. This is a very “product”-based approach: the key to liability is whose name is on the system; and
- “users” of artificial intelligence systems who are subject to a lesser tier of obligations. However, importantly, where the user places their name on the system, changes its purpose or makes substantial modifications, the user will become a provider.
There are also obligations placed on the importers and distributors of these systems. The Regulation contains level playing field provisions which extend to providers or users in third countries where the output of the system is used in the EU.
What obligations apply to “high-risk” AI?
Significant and extensive compliance obligations are placed on providers of “high-risk” artificial intelligence systems. They include requirements to:
- use detailed and specific risk and quality management systems and subject the system to a conformity assessment;
- only use high-quality data that is “representative, free from errors and complete”. It is not clear how practical this is, or whether it is possible to take compensating measures for any (likely inevitable) deficiencies in the data. Interestingly, there is an express right to use special category personal data to monitor bias (see the illustrative sketch after this list);
- keep records and logs, and be transparent to users about the use and operation of the system. It is not clear how this obligation to maintain logs is to be reconciled with the data minimisation obligations of the GDPR;
- ensure human oversight by suitably trained and expert individuals. Some systems will need to have a ‘kill switch’ built in and/or require explicit human confirmation of decision making;
- ensure the accuracy, robustness and security of the system;
- conduct post-market monitoring of the operation of the system and notify any serious incident or malfunctioning to the relevant national regulator; and
- register the system on a public register.
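As a purely illustrative example of the data quality point above, the sketch below shows the kind of basic checks a provider might run against the “representative, free from errors and complete” requirement. The Regulation does not prescribe any particular test; the column names, thresholds and use of pandas are assumptions made for the example.

```python
# Illustrative only: basic checks loosely inspired by the requirement for
# training data to be "representative, free from errors and complete".
# Column names and the sample data are invented for the example.
import pandas as pd

def data_quality_report(df: pd.DataFrame, protected_column: str) -> dict:
    """Report missing values, duplicate rows and group representation."""
    return {
        # "Complete": share of missing values in each column.
        "missing_share": df.isna().mean().to_dict(),
        # "Free from errors": a crude proxy - exact duplicate rows.
        "duplicate_rows": int(df.duplicated().sum()),
        # "Representative": distribution of a protected characteristic, which
        # the Regulation expressly permits processing in order to monitor bias.
        "group_shares": df[protected_column].value_counts(normalize=True).to_dict(),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "age_band": ["18-30", "31-50", "31-50", "51+", None],
        "outcome":  [1, 0, 0, 1, 1],
    })
    print(data_quality_report(sample, protected_column="age_band"))
```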
These compliance obligations are detailed and burdensome. Providers of “high-risk” artificial intelligence systems will need to expend significant effort to comply with them.
Users of “high-risk” artificial intelligence systems will be subject to more limited obligations. They must use that technology in accordance with the instructions for use, monitor the operation of the system and keep logs of its use.
Limited risk – Deepfakes and biometric surveillance
In addition to the obligations placed on “high-risk” artificial intelligence systems, there are also specific obligations for other use cases.
In particular, where the system is used to create “deepfakes”, to interact with humans or to recognise emotions, the affected human must be informed.
Interaction with the GDPR and other regulatory regimes
The proposed Regulation overlaps with a number of other legal instruments, particularly the GDPR in relation to systems that process personal data and EU conformity laws in relation to products. This arguably makes the proposed Regulation unnecessary. However, there are some significant differences between the Regulation and the GDPR:
- the proposed Regulation acts as a form of lex specialis to the GDPR in the field of artificial intelligence imposing additional or more detailed obligations, such as the imposition of comprehensive risk management systems and specific controls on training data; and
- the scope of the proposed Regulation extends to “providers”, i.e. the producers of the relevant artificial intelligence technology. This contrasts with the GDPR, which is strictly limited to those actually processing personal data.
Regulatory superstructure
This is all backed up by a new regulatory superstructure. Each Member State will need to appoint a national regulator and a new European Artificial Intelligence Board will be set up.
The sanctions for breach are also potentially significant. Infringement of the general prohibitions (described above) and data governance provisions will attract fines of up to 6% of annual turnover or €30 million, and breach of most other parts of the Regulation will attract fines of up to 4% of annual turnover or €20 million.
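To put those caps in context, the short sketch below shows how the percentage and fixed figures might combine for a company of a given size, assuming the higher of the two amounts sets the maximum fine (the turnover figures chosen for the example are invented).

```python
# Illustrative only: the maximum fine for a given annual turnover, assuming
# the higher of the fixed cap and the turnover-based cap applies.
def max_fine(annual_turnover_eur: float, prohibition_or_data_breach: bool) -> float:
    pct, fixed = (0.06, 30_000_000) if prohibition_or_data_breach else (0.04, 20_000_000)
    return max(pct * annual_turnover_eur, fixed)

# A company with EUR 2bn turnover infringing a general prohibition:
print(max_fine(2_000_000_000, prohibition_or_data_breach=True))   # 120,000,000.0
# The same company breaching most other parts of the Regulation:
print(max_fine(2_000_000_000, prohibition_or_data_breach=False))  # 80,000,000.0
```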
Accompanying these sanctioning powers is a broad range of investigatory tools, including the right to access source code and run test cases.
Next steps
The proposed Regulation on Artificial Intelligence has a long way to go. It will need to pass through the EU’s legislative machinery and will then apply two years after it is adopted. This means these new obligations are unlikely to apply until 2024.
In the meantime, the use of artificial intelligence continues to raise new and interesting legal issues across a whole range of areas, including data protection, intellectual property, competition, financial regulation and product liability. Our AI Toolkit (here) provides detailed, practical tips on how to deploy this technology safely, ethically and lawfully.
The proposed Regulation on Artificial Intelligence is available here.
By Peter Church
[1] Asimov later added the Zeroth law to cater for situations in which robots have taken responsibility for governing.