AI & the GDPR: Regulating the minds of machines
Artificial intelligence is the latest in a long wave of disruptive technology, offering significant benefits but also creating risks if deployed in an uncontrolled manner. In this article, we consider the interface between artificial intelligence and data protection law, and the extent to which the GDPR adequately regulates this exciting new technology.
What is AI?
The term “artificial intelligence” potentially covers a wide spectrum of technology but normally refers to systems that do not follow pre-programmed instructions and instead learn for themselves. This learning might be based on an existing data set, as in supervised or unsupervised learning, or on prioritising actions that lead to the best outcomes, as in reinforcement learning.
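By way of illustration, the sketch below shows the difference in practice: the decision rules are inferred from historic examples rather than written out by a programmer. It is a minimal, purely illustrative example; the data, the feature choices and the use of the scikit-learn library are assumptions made for the purpose of the sketch.

```python
# Minimal sketch of supervised learning: the model infers its own decision
# rules from labelled historic examples rather than following instructions
# written by a programmer. Data and features are hypothetical; assumes scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Each row: [applicant income (£k), loan amount requested (£k)]
X = [[30, 5], [60, 10], [20, 15], [90, 20], [25, 12], [70, 8]]
y = [1, 1, 0, 1, 0, 1]  # historic outcomes: 1 = repaid, 0 = defaulted

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)  # the "rules" are learned from the data, not programmed

print(model.predict([[40, 9]]))  # predicted outcome for a new applicant
```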
One of the implications of this behaviour being learned, and not programmed, is that it may not be clear how the system makes a decision. It operates in a “black box”. The system might work perfectly in a development environment but become unpredictable or unreliable in the real world. Unlike a human, the algorithm has no higher-level assessment of whether what it is doing is obviously “wrong”. There is no “common sense” or “ethical” override.
This creates a number of legal concerns. The underlying algorithm might be making decisions that are biased or discriminatory and in breach of the broad fairness requirements of the GDPR.
Expanding the law to the minds of machines
The use of artificial intelligence to replace human decision making expands the scope of data protection law.
This is because the GDPR does not regulate the minds of men. Human decisions cannot generally be challenged on the basis that they are unfair or unlawful under the GDPR, unless based on inaccurate or unlawfully processed data. For example, you cannot use the GDPR to ask that your exam be re-marked or that your insurance coverage not be reassessed (assuming those decisions are taken by a human - see Nowak (C-434/16) and Johnson v Medical Defence Union [2007] EWCA Civ 262).
In contrast, decisions taken by machine are directly subject to the GDPR, including its core requirements of fairness and accountability. In other words, the data subject can challenge the substantive decision made by the machine on the grounds that it is unfair or unlawful.
Given the accountability duties under the GDPR, defending such a claim will require you not only to ensure the machine’s decision-making process is fair but also to demonstrate that this is the case. This is likely to be challenging where that decision is taken in a “black box”, though the use of counterfactuals and other measures may help (see below).
Finally, there is a risk the system will make decisions that are discriminatory or that reflect biases in the underlying dataset. This is not just a potential breach of data protection law but might also breach the Equality Act 2010, and it raises broader ethical concerns.
“The computer says no”
There are further protections where automated decision making takes place – i.e. where an artificially intelligent system is solely responsible for a decision that has legal effects on, or similarly significantly affects, a data subject.
This reflects the common-sense expectation that important decisions, for example whether to offer someone a job or provide a mortgage, should not be entirely delegated to a machine.
Under the GDPR, this type of automated decision making can only take place in the following situations:
- Human involvement – If a human is involved in the decision-making process, it will not be a decision based solely on automated processing. However, that involvement would have to be meaningful and substantive. It must be more than just rubber-stamping the machine’s decision.
- Consent – Automated decision making is permitted where the individual has provided explicit consent. While this sounds like an attractive option, the GDPR places a very high threshold on consent and this will only be valid where the relevant decision-making process has been clearly explained and agreed to.
- Performance of contract – Automated decision making is also permitted where it is necessary for the performance of a contract or in order to enter into a contract. An example might be carrying out credit checks on a new customer or considering whether to offer someone a job.
- Authorised by law – Finally, automated decision making is permitted where it is authorised by law.
Even where automated decisions are permitted, you must put suitable safeguards in place to protect the individual’s interests. This means notifying the individual (see below) and giving them the right to a human evaluation of the decision and to contest the decision.
Transparency and “explainability”
The GDPR also requires you to tell individuals what information you hold about them and how it is being used. This means that if you are going to use artificial intelligence to process someone’s personal data, you normally need to tell them about it.
More importantly, where automated decision making takes place, there is a “right of explanation”. You must tell affected individuals that automated decision making is taking place, what its significance is, and how it operates.
The obligation is to provide “meaningful information about the logic involved”. This can be challenging if the algorithm is opaque. The logic used may not be easy to describe and might not even be understandable in the first place. These difficulties are recognised by regulators who do not expect organisations to provide a complex explanation of how the algorithm works or disclosure of the full algorithm itself (see the Guidelines on automated individual decision making and profiling, WP 251 rev 01).
However, you should provide as full a description as possible of the data used in the decision-making process, including matters such as the main factors considered when making the decision, the source of the information and its relevance.
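One practical way to surface those “main factors” is to extract some measure of feature importance or attribution from the model and translate it into plain language. The sketch below is illustrative only: the model, the feature names and the use of scikit-learn are assumptions, and real systems may need more sophisticated attribution techniques.

```python
# Sketch: ranking the main factors behind a model's decisions so they can be
# described to data subjects. Hypothetical features and data; assumes scikit-learn.
from sklearn.ensemble import RandomForestClassifier

features = ["income", "loan_amount", "existing_debt", "years_at_address"]
X = [[30, 5, 2, 1], [60, 10, 1, 5], [20, 15, 8, 2],
     [90, 20, 3, 10], [25, 12, 6, 1], [70, 8, 0, 7]]
y = [1, 1, 0, 1, 0, 1]  # historic outcomes: 1 = loan granted, 0 = refused

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank the factors by their overall influence on the model's decisions
for name, weight in sorted(zip(features, model.feature_importances_),
                           key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {weight:.2f}")
```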
Counterfactuals and other mitigations
None of these challenges necessarily prevents the use of artificial intelligence so long as it is used in a safe and controlled manner. Deployed properly with appropriate safeguards, artificial intelligence offers a number of potential benefits when it comes to decision making, such as reducing the errors and unconscious biases that arise in human decision making.
The sorts of safeguards you might expect to see include:
- Counterfactuals – For example, if a loan application is rejected by an artificially intelligent system, it could provide the applicant not just with that rejection but also with an assessment of the minimum change needed for the application to be successful (e.g. the loan would be granted if it were for £2,000 less or the borrower’s income £5,000 more). These counterfactual edge cases will provide some insight into the decision-making process (see the first sketch after this list).
- Verification – It may be possible to provide some form of human verification of the artificial intelligence’s decision-making process. For simple tasks, there might be easy ways to do this. For example, a picture classification algorithm might highlight the pixels that strongly influence the classification decision.
- Testing and output analysis – The system should be thoroughly tested on a robust dataset. The decisions of the system should also be analysed to ensure it is not making discriminatory or inappropriate decisions, for example to confirm that a system used to shortlist candidates for interview is not preferring female candidates over male candidates (or vice versa) (see the output-analysis sketch after this list).
- Ongoing sampling – A sample of outputs from the system should be reviewed on an ongoing basis to confirm the quality of its output, particularly where the system is used in a dynamic environment.
- Circuit breakers – It will usually be worth adding circuit breakers to the system so that, if its outputs exceed certain limits, either a warning is triggered or the system is suspended (see the final sketch after this list).
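To make the counterfactual safeguard more concrete, the first sketch searches for the smallest reduction in the requested loan amount that turns a rejection into an approval. The model, figures and step size are hypothetical stand-ins; a real system would query the deployed model rather than the simple rule used here.

```python
# Sketch of a counterfactual: find the smallest change to one input that
# flips the model's decision. Model, figures and step size are hypothetical.

def stand_in_model(income, loan_amount):
    """Stand-in for a trained model: approve if the loan is small relative to income."""
    return "approved" if loan_amount <= income * 0.4 else "rejected"

def counterfactual_loan_amount(income, loan_amount, step=500):
    """Search downwards for the largest loan amount that would be approved."""
    amount = loan_amount
    while amount > 0:
        if stand_in_model(income, amount) == "approved":
            return amount
        amount -= step
    return None

income, requested = 30_000, 15_000
print(stand_in_model(income, requested))              # "rejected"
print(counterfactual_loan_amount(income, requested))  # 12000, i.e. granted if £3,000 less
```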
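The output analysis and ongoing sampling described above can start very simply, for example by comparing outcome rates across a protected characteristic. The decisions in the sketch below are hypothetical, and the comparison shown is only a first pass; a robust review would apply a proper statistical test.

```python
# Sketch of output analysis: compare shortlisting rates across a protected
# characteristic to spot potential bias. The decisions are hypothetical.
from collections import defaultdict

decisions = [
    {"gender": "female", "shortlisted": True},
    {"gender": "female", "shortlisted": False},
    {"gender": "female", "shortlisted": True},
    {"gender": "male", "shortlisted": False},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": False},
]

totals, shortlisted = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["gender"]] += 1
    shortlisted[d["gender"]] += d["shortlisted"]

for group in totals:
    print(f"{group}: {shortlisted[group] / totals[group]:.0%} shortlisted")
# A large gap between the rates would prompt further investigation,
# for example a formal statistical test or a review of the training data.
```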
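Finally, a circuit breaker can be a thin monitoring layer around the model. The sketch below assumes a hypothetical approval-rate limit and window size; appropriate thresholds will depend on the system and the decisions it takes.

```python
# Sketch of a circuit breaker: monitor a rolling window of outputs and flag
# the system for suspension if they drift outside expected limits.
# Window size and rate limits are illustrative only.
from collections import deque

class CircuitBreaker:
    def __init__(self, window=100, min_rate=0.2, max_rate=0.8):
        self.outcomes = deque(maxlen=window)
        self.min_rate, self.max_rate = min_rate, max_rate
        self.tripped = False

    def record(self, approved: bool) -> None:
        self.outcomes.append(approved)
        if len(self.outcomes) == self.outcomes.maxlen:
            rate = sum(self.outcomes) / len(self.outcomes)
            if not self.min_rate <= rate <= self.max_rate:
                self.tripped = True  # suspend automated decisions / alert a human

breaker = CircuitBreaker(window=10, min_rate=0.3, max_rate=0.7)
for decision in [True] * 9 + [False]:
    breaker.record(decision)
print(breaker.tripped)  # True: a 90% approval rate exceeds the configured limit
```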
Similar controls are already required under MiFID II for financial services firms carrying out algorithmic trading and high-frequency trading.
Impact assessments
The interaction between artificial intelligence and the GDPR thus engages a number of relatively complex legal and technical issues that require value judgements.
In most cases, you will need to document this evaluation. This will take the form of either a:
- Data protection impact assessment – These are mandatory if the processing is “high risk” and must involve your data protection officer (if appointed). If the assessment shows that there are unmitigated high risks, you must consult your data protection regulator before rolling out that system; or
- Legitimate interests assessment – If the processing is based on the so-called “legitimate interests test” the UK Information Commissioner will expect to see that assessment documented. This is a much quicker and more lightweight process than a full data protection impact assessment and can be recorded in a relatively short and informal document.
In many cases, the deployment of artificial intelligence systems will trigger the need for a full data protection impact assessment. EU guidance indicates that the use of new technology, automated decision making and similar activities will trigger the need for a data protection impact assessment (Guidelines on Data Protection Impact Assessment, WP 248 rev 01). In the UK, the Information Commissioner has issued a list of activities that prima facie will require a data protection impact assessment. It specifically refers to “Artificial intelligence, machine learning and deep learning” as a factor that may trigger the need for such an assessment.
Our AI Toolkit
These issues are all explored in greater depth in our AI Toolkit, along with related issues such as ownership, liability and financial services regulation.
Our AI Toolkit is available here.
By Peter Church