The Financial Services Consumer Panel has published a paper examining whether firms' use of personal data is leading to bias and detriment for consumers with protected characteristics. It concludes that most experts assume this is the case, for two reasons: algorithms use historic data to make decisions on access to products, and it is already known that people with certain characteristics are locked out of products; and there is significant overlap between some protected characteristics and data that might be used to assess risk, such as postcodes or credit history. That said, it is hard to evidence that algorithmic decision-making is the cause of bias because the systems involved are opaque. As a result, there should be a debate about where to draw the line on the ethical use of personal data in risk-based decisions. The paper also concludes that, despite this opacity, there was clear evidence of ethnicity and disability bias. It says regulation must play a critical role in addressing how firms use personal data, algorithms and AI, and that firms must be required to show that their use of these tools does not cause bias.