BoE and FCA report on AI in financial services

The BoE and FCA have published a report on their third survey of AI and machine learning in UK financial services.

Use and adoption

The regulators found that 75% of firms already use AI, with a further 10% planning to do so in the next 3 years. This is a jump from the 2022 figures of 58% and 14% respectively.

Foundation models formed 17% of all AI use cases.

Third-party exposure

Approximately one-third of use cases were third-party implementations, up from 17% in 2022. The regulators highlighted this as an indication that third-party exposure will continue to rise as model complexity increases and outsourcing costs decrease.

The top three third-party providers accounted for 73% of all reported cloud providers, 44% of model providers, and 33% of data providers.

Automated decision-making

Around 55% of use cases had some degree of automated decision-making, with 24% being semi-autonomous, i.e. designed to involve some level of human oversight for critical or ambiguous decisions. Only 2% of use cases had fully autonomous decision-making.

Materiality

62% of use cases were rated low materiality by firms using them. Conversely, 16% were given a high materiality rating.

Understanding of AI systems

46% of respondents reported only having ‘partial understanding’ of the AI technologies they used, versus 34% of respondents who said they had a ‘complete understanding’.

This was mainly due to the use of third-party models, with which firms acknowledged a lack of complete familiarity compared to internally developed models.

Benefits and risks of AI

The largest perceived current benefits of AI were:

  • Data and analytical insights
  • Anti-money laundering and combating fraud
  • Cybersecurity

The areas with the largest expected increase in benefits over the next 3 years were operational efficiency, productivity and cost base, largely in line with the 2022 findings.

The largest perceived current risks of AI all related to data:

  • Data privacy and protection
  • Data quality
  • Data security
  • Data bias and representativeness

The risks that were expected to increase most in the next 3 years were third-party dependencies, model complexity, and embedded / ‘hidden’ models. The increase in the average perceived benefit over the next 3 years (21%) was greater than for the average perceived risk (9%).

Cybersecurity was rated the highest perceived systemic risk both currently and in 3 years' time, but the largest increase in systemic risk over that period was expected from critical third parties.

Constraints

The largest perceived regulatory constraint to AI use was data protection and privacy, followed by resilience, cybersecurity and third-party rules, and the Consumer Duty. For non-regulatory constraints, this was the safety, security and robustness of AI models, followed by insufficient talent and access to skills.

Governance and accountability

84% of respondents had an accountable person for their AI framework. They used a variety of governance frameworks, controls and/or processes specific to AI use cases, with over half of firms reporting having nine or more such governance components.

While 72% of firms said their executive leadership was accountable for AI use cases, accountability was often split, with most firms reporting three or more accountable persons or bodies.

Laura Wiles