
FCA research on explaining AI outputs to consumers

As part of its AI research series, the FCA has published a research note exploring the effectiveness of different methods for explaining AI outputs to consumers in the context of determining consumer creditworthiness. The findings underscore the importance of testing materials provided to consumers when explaining AI, machine learning and/or algorithmic decision-making.

In its research, the FCA tested whether participants were able to identify errors caused either by incorrect data used by a credit scoring algorithm, or by flaws in the algorithm’s decision logic itself. It found that the method of explaining algorithm-assisted decisions significantly impacted participants’ ability to judge these decisions, but that the impact varied depending on the type of error.

For example, providing an overview of the data available to the algorithm impaired participants’ ability to identify errors in the input data, but helped them challenge errors in the algorithm’s decision logic, such as the failure to use a relevant piece of information about a consumer. In this respect it was more effective than explanations focussed directly on the decision logic itself.

The FCA suggests that this variance can be explained partly by the fact that a larger volume of information may make it more difficult to identify errors, and partly because additional information about the algorithm’s decision logic may encourage participants to focus on whether that logic was followed, rather than on whether the logic itself is sound.

Overall, the FCA found that providing additional information about the underlying workings of an algorithm was well received, and gave consumers confidence to disagree with the algorithm’s decisions. However, additional information may not always aid decision-making, and may even lead to worse outcomes for consumers by impairing their ability to question errors.

Laura Wiles