What might AI regulation in the UK workplace look like?
As adoption of AI systems continues at speed, the constant flow of new products with new capabilities raises concern that regulation of AI, particularly for workers and jobseekers, is failing to keep pace. In its recent response to the AI white paper, the UK government confirmed that it has no current plans to introduce AI legislation. This approach contrasts starkly with the EU. The EU’s AI Act, approved earlier this year, will introduce a regulatory framework for the use of AI with which organisations within the EU will be expected to comply from 2026.
To tackle the lack of targeted domestic protection for individuals subject to unfair or biased algorithmic decision-making, the Trades Union Congress (TUC) has published a draft AI bill setting out a framework for the regulation of AI in UK workplaces. The AI Bill could prove influential.
The draft Artificial Intelligence (Regulation and Employment Rights) Bill
In April 2024, the TUC published the draft Artificial Intelligence (Regulation and Employment Rights) Bill. The AI Bill does not have government backing but, in identifying ways to ensure that people are protected from the risks and harms of AI-powered decision making in the workplace, it is intended to generate debate and offer solutions for the responsible adoption of AI.
The AI Bill is the work of an AI taskforce whose members together bring expertise in technology, law and politics. The AI Bill proposes a raft of protections for workers and jobseekers, without seeking to be overly prescriptive. In this vein, it tackles only high-risk and prohibited uses of AI systems.
Similar to the EU AI Act, the aim of the AI Bill is to lay down rules for the fair and safe use of AI systems in the workplace by introducing a matrix of obligations on employers and a series of protections for workers. Below we set out the key provisions of the AI Bill.
Workplace AI risk assessments
Prior to implementing AI systems to be used for the purposes of high-risk decision-making activities, employers must undertake a workplace AI risk assessment. High-risk activities include decisions taken in relation to disciplinary matters, the termination of employment and capability assessments.
Among other things, the AI system must be assessed in relation to equality, data protection and human rights risks. Once implemented, further assessments of the system must be undertaken on an annual basis. These should take into account the extent to which the system gives rise to inaccurate outcomes. The AI Bill requires employers to consult with workers about the use of AI systems and the AI risk assessment will be a core part of that discussion.
Register of AI systems
All AI systems used by an employer in high-risk decision-making must be recorded on a register which should identify the categories of decision the system makes and the purpose and aim of the system. The register must be made available to workers and jobseekers.
The existence of the register, together with the duty to conduct an AI risk assessment, will impose tight controls and governance over the use of AI systems in employment decision-making.
Right of explanation and human reconsideration
The AI Bill gives workers and jobseekers the right to seek a personalised explanation of any high-risk decision which is, or might be, to their detriment. The explanation must set out how the decision affects the individual. The AI Bill also gives workers and jobseekers the right to request a human reconsideration of any high-risk decision made about them by an AI system.
Discrimination in high-risk decision-making
One of the challenges presented by using AI in a workplace context is deciding where liability for discrimination should attach when multiple parties have been involved in the design, development, training, testing and use of an AI tool. The AI Bill proposes a solution by introducing a reverse burden of proof, requiring employers to demonstrate that the output of an AI tool was not discriminatory.
The AI Bill also provides employers with a new statutory defence to discrimination claims where they can demonstrate that:
- They neither created nor modified the AI system;
- They took adequate steps to carefully audit AI systems prior to use; and
- There were procedural safeguards to remove the risk of discrimination and prevent the system from being used in a discriminatory way.
This defence is intended to promote thorough due diligence prior to the adoption of new AI decision-making systems.
Ban on emotion-recognition technology
The AI Bill would introduce a ban on emotion-recognition technology where its use would be detrimental to a worker or jobseeker. Many such systems are already in use, both as part of recruitment and as a way of assessing employee wellbeing in the workplace. However, emotion-recognition technology is widely perceived as intrusive, and doubts have been raised as to how effective the systems are (including concerns that they perform less effectively for minority ethnic groups who are underrepresented in the datasets the systems are trained on).
The ban proposed by the AI Bill aligns with the approach adopted in the EU AI Act, which prohibits the use of emotion-inference systems in the workplace. Under the EU AI Act, failure to comply with the ban attracts a maximum fine of the higher of EUR 35 million and 7% of total worldwide annual turnover. The sanction proposed by the AI Bill is much less onerous: workers would be entitled to make a complaint to the employment tribunal, with compensation calculated on a just and equitable basis.
Remedies for breach of worker rights
A number of the protections contained in the AI Bill are backed up by the right to bring a tribunal claim in the event of a breach and to seek compensation. In addition, the AI Bill introduces specific protection against unfair dismissal where the reason for the dismissal is unfair reliance on high-risk decision-making.
Where next?
While the AI Bill sets out a helpful blueprint for the regulation of AI in the workplace, it remains to be seen what impact it will have. More recent statements by the UK government suggest that its opposition to the regulation of AI may be softening, with an acknowledgement that some form of AI legislation will eventually be needed.
More definitive change may be on the horizon in the event of a change of government following a general election in the UK, expected later this year. The Labour party has given positive indications that it would adopt a proactive approach to addressing the impact of AI on worker rights.
As concerns over the safety of AI systems grow, tighter controls in the UK seem inevitable. The TUC’s AI Bill represents a ready-made framework which could form the basis for future legislation.