FCA conducts literature review on bias in supervised machine learning

The FCA has published a literature review on bias in supervised machine learning. It is the first of a series of planned research notes on bias in AI.

The research note found:

  • The main potential source of bias was data issues, arising from past decision-making, historical practices of exclusion, and sampling problems.
  • Biases can also arise due to choices made during the AI modelling process itself, including:
    • what variables are included;
    • what specific statistical model is used; and
    • how humans choose to use and interpret predictive models.
  • Technical methods for identifying and mitigating such biases should be supplemented by careful consideration of context and by human review processes. The paper cautions, however, that technical mitigation strategies may reduce model accuracy and have unintended consequences for bias against other groups.
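To illustrate the kind of technical bias check the note refers to, the sketch below computes a demographic parity difference, i.e. the gap in positive-prediction rates between groups. This is a minimal, hypothetical example (the metric, data, and function name are illustrative assumptions, not taken from the FCA paper):

```python
# Hypothetical sketch: demographic parity difference, one simple
# technical check for bias in a supervised model's predictions.
from typing import Sequence


def demographic_parity_difference(preds: Sequence[int],
                                  groups: Sequence[str]) -> float:
    """Gap between the highest and lowest positive-prediction
    rates across the groups present in `groups`."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]


# Toy predictions for two groups "a" and "b":
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap near zero suggests similar treatment across groups on this metric; as the note observes, optimising such a metric can trade off against accuracy and against fairness measured on other groups.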

Laura Wiles