The PRA and FCA have published a feedback statement following the discussion paper (DP) on AI and machine learning that they issued jointly with the Bank of England. The DP asked for views on the role supervisors should play in supporting the safe and responsible adoption of AI by financial services firms. In the feedback statement, the regulators:
- identify themes from the responses;
- look at the potential risks and benefits of the use of AI in financial services;
- consider how the current regulatory framework applies to AI;
- outline whether additional clarification of the existing regulatory requirements could be helpful; and
- look at how policy can best support the adoption of AI in a safe and sensible way.
Highlights from the responses to the DP include:
- there is no need for a regulatory definition of AI, and it would not be helpful to have one. Risk- or principles-based approaches to defining AI by reference to its characteristics would be more helpful in enabling assessment of the risks it poses or amplifies;
- live regulatory guidance, updated as the technology evolves, could be useful;
- it will be important to maintain industry engagement;
- coordination between national and international regulators is key given the complex and fragmented regulatory landscape, particularly in relation to data risks concerning fairness, bias and the management of protected characteristics;
- consumer outcomes, together with fairness and ethics, should be a key focus of regulation;
- the increasing use of third-party models and data is a concern, and further guidance on the resulting risks would be welcome – respondents noted the critical third parties (CTP) regime;
- firms need to involve all relevant business units;
- for banks, respondents thought the model risk management principles would cover AI model risk, but that they could be strengthened or clarified; and
- the SMCR and other existing governance structures are sufficient to address AI risks.