What role could GenAI play in how banks manage risk?
Artificial intelligence continues to challenge the way that banks think about their business. The buzz around generative AI (GenAI), in particular, has opened up new conversations about how banks can further embrace this technology. As AI-specific rules and guidance emerge, the immediate priority for any bank adopting AI is to ensure that its use of the technology meets existing standards for financial services.
Opportunities for AI in banking
Like all businesses, banks are exploring how to use GenAI safely. Many banks already have a strong track record of adopting earlier forms of AI and machine learning. This provides a helpful launchpad for further development, but different AI applications carry different levels of risk and must be managed accordingly.
Broadly speaking, use cases for AI in banking have tended to support back-office functions. A 2022 survey by the Bank of England and the Financial Conduct Authority (FCA) found that supporting anti-money-laundering (AML) and know-your-customer (KYC) processes was one of the most commonly cited critical use cases for AI and machine learning. Respondents also commonly reported using AI for risk-management purposes, such as predicting expected cash flows or identifying the misuse of accounts. Automated screening of payment transactions to spot fraud is now commonplace.
GenAI builds on more traditional forms of machine learning. One key difference is the ability to engage with AI using natural language and user-friendly interfaces. This allows more people across more areas of banks’ businesses to access the technology and engage with its underlying datasets without needing a grounding in computer science.
Several banks have restricted the use of publicly available large language models (LLMs), such as OpenAI’s ChatGPT. As discussed below, this approach is readily justified by regulatory concerns about both the data fed into those models and the reliability of their output. Nonetheless, many banks are experimenting with their own GenAI models for internal purposes.
Such investments in GenAI are likely to be billed primarily as internal efficiency tools. For example, a souped-up internal search function could present front-office staff with information from the bank’s extensive suite of compliance policies. A better understanding of those policies could reduce demand on the bank’s second line of defence and, hopefully, improve compliance standards.
Those same documents may have been written with the help of AI. It is not hard to imagine GenAI tools becoming a crutch when drafting emails, presentations, meeting notes and much more. Compliance teams could task GenAI with suggesting policy updates in response to a regulatory change; the risk function could ask it to spot anomalous behaviour; and managers could request that it provide briefings on business data.
In some cases, the power to synthesise unstructured data could help a bank meet its regulatory obligations. For example, in the UK the FCA’s Consumer Duty sets an overarching requirement for firms to be more proactive in delivering good outcomes for retail customers. Firms and their senior management must monitor data to satisfy themselves that their customers’ outcomes are consistent with the Duty. AI tools, including potentially GenAI, could support this monitoring exercise.
Using GenAI in front-office or customer-facing roles is more ambitious. From generating personalised marketing content to enhancing customer support and even providing advice, AI tools could increasingly intermediate the customer experience. But caution is required. These potentially higher-impact use cases also come with higher regulatory risks.
Accommodating AI in banking regulation
Relying on GenAI is not without its challenges. Most prominently, the tendency of large language models to invent information, or “hallucinate”, calls into question their reliability as sources of information. Outputs can be inconsistent, even when inputs are the same. And the authoritative way in which these models retrieve and present information can lull users into trusting their statements without due scepticism.
When adopting AI, banks must be mindful of their regulatory obligations. Financial regulators in the UK have recently reiterated that their existing rulebooks already cover firms’ AI uses. Their rules do not usually mandate or prohibit specific technologies. But, as the Bank of England has pointed out, being “technology-agnostic” does not mean “technology-blind”. Bank supervisors are actively working to understand AI-specific risks and to determine what guidance or other actions may be needed to address potential harms.
In a 2023 white paper, the UK Government called on sectoral regulators to align their approaches with five principles for safe AI adoption: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. All five principles can be mapped against existing regulations maintained by the FCA and the Bank of England.
Both regulators set high-level rules that can accommodate firms’ uses of AI. For example, UK banks must treat customers fairly and communicate with them clearly. This bears on how transparent firms are about their use of AI in their businesses. Firms should tread carefully when the technology’s outputs could negatively affect customers, for example when running credit checks.
Another example of a high-level requirement that can be applied to AI is the FCA’s Consumer Duty. This is a powerful tool for addressing AI’s risks to retail-banking customers. For example, in-scope firms must enable and support retail customers to pursue their financial objectives. They must also act in good faith, which involves fair and open dealings with retail customers. The FCA has warned that it does not want to see firms’ AI use embedding biases that could lead to worse outcomes for some groups of consumers.
More targeted regulations are also relevant. For example, banks must meet detailed requirements related to their systems and controls, which specify how they should manage operational risks. This means that banks must prepare for disruptions to their AI systems, especially where those systems support important business services.
Individuals should also consider their regulatory responsibilities. For example, in the UK, regulators may hold senior managers to account if they fail to take reasonable steps to prevent a regulatory breach by their firm. To show that they have taken reasonable steps, senior managers will want to ensure that they understand the risks associated with any AI used within their areas of responsibility and are ready to provide evidence that adequate systems and controls are in place to manage those risks.
Incoming AI regulations
As well as complying with existing financial-services regulations, banks must monitor cross-sectoral standards for AI. Policymakers are starting to introduce AI-specific rules and guidance in several important jurisdictions for financial services. Among these, the EU’s recently finalised structure for regulating AI has attracted the most attention.
The EU Artificial Intelligence Act, which will start to apply in phases over the next two years, focuses on transparency, accountability and human oversight. The most onerous rules apply to specific high-risk use cases. The list of high-risk AI systems includes those used to assess creditworthiness and establish credit scores. Banks should note that some employment-related use cases, such as monitoring and evaluating employees, are also considered high risk. Rules will also apply to the use of GenAI.
Many of the obligations set by the EU’s AI Act echo existing standards under financial regulations. This includes ensuring robust governance arrangements and consistent lines of responsibility around AI systems, monitoring and managing third-party risks, and protecting customers from harm. This is consistent with other areas of the EU’s rulebook, including the incoming Digital Operational Resilience Act (DORA), which raises expectations for how banks and other financial entities in the EU should manage IT risks.
Taking a risk-based approach
Banks’ extensive risk and compliance processes mean they are well placed to absorb this additional layer of regulation. The challenge for banks is to identify the gap between how their governance processes around AI operate today and what will be considered best practices in the future. Even though AI regulation clarifies expectations in some areas, regulators are unlikely to specify what is appropriate, fair or safe ahead of time. Banks should determine this for themselves and justify their decision-making in the process.
To the extent that they have not already started on this process, banks should set up an integrated compliance programme focused on AI. Ideally, this programme would provide consistency to the firm’s roll-out of AI while allowing sufficient flexibility to account for different businesses and use cases. It could also act as a centre of excellence or a hub for general AI-related matters.
An AI steering committee may help centralise this programme. An AI SteerCo’s responsibilities could include reviewing the bank’s business-line policy documents, governance and oversight structures, and third-party risk-management framework. It could develop protocols for staff interacting with or developing AI tools. It could also look ahead to changes in technology, risk and regulation and anticipate how compliance arrangements may evolve as a result.
Banks have already started on their AI-compliance journeys. Ensuring they align with the current rulebook is the first step towards meeting the additional challenges of incoming AI regulations. A risk-based approach that identifies and manages potential harms to the bank, its customers and the wider financial system will be fit for the future.
This article was originally published in the spring 2024 edition of the International Banker.