FCA looks at bias in natural language processing

The FCA has published a research note examining bias in the specific context of word embeddings. Word embeddings are used in many natural language processing applications, including as an alternative to large language models, and work by creating mathematical representations of words that capture their meanings and relationships to other words or phrases. The FCA looked at the risk of word embeddings encoding harmful biases against demographic groups, which could cause tangible harm if used in consumer-facing applications. There is so far no consensus on how to tackle the problem, so the FCA has been looking at how these biases could be identified and removed at source. However, its research shows that there are significant limitations in what can be done using current methods.
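To illustrate the kind of bias at issue, the sketch below uses toy, hand-made vectors (not real embeddings, and not the FCA's methodology) to show how a word's cosine similarity to gendered terms can reveal an encoded association, for example "doctor" sitting closer to "he" and "nurse" closer to "she". All vector values and word choices here are hypothetical, chosen purely for illustration.

```python
import math

# Hypothetical 3-dimensional "embeddings" for illustration only.
# Real embeddings have hundreds of dimensions learned from text corpora.
embeddings = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.2, 0.9, 0.3],
    "he":     [0.8, 0.2, 0.1],
    "she":    [0.1, 0.8, 0.2],
}

def cosine(u, v):
    # Standard cosine similarity: how closely two vectors point the same way.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def bias_score(word):
    # Positive: the word sits closer to "he"; negative: closer to "she".
    return cosine(embeddings[word], embeddings["he"]) - cosine(embeddings[word], embeddings["she"])

print(f"doctor: {bias_score('doctor'):+.2f}")
print(f"nurse:  {bias_score('nurse'):+.2f}")
```

In this toy example "doctor" scores positive and "nurse" negative, the kind of demographic association that, at scale, could feed through into consumer-facing applications.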

Emma Radmore