
Supreme Court Justice speaks on law and AI

Lord Hodge, Deputy President of the Supreme Court, gave a speech at De Montfort University, Leicester, entitled ‘Law and AI: Where are we going?’

In it, he first considered the risks in the use of ‘reputational data’, drawing parallels between China’s ‘surveillance state’, the mutual ratings systems used by apps such as Uber, and the role of credit reference agencies in controlling access to credit such as mortgages. In particular, he highlighted the issues arising from false or flawed ‘input data’, as has occurred on business and restaurant rating sites that have been ‘gamed’ or deliberately misused. He then went on to state that generative AI, such as ChatGPT, poses even greater risks.

The speech then addressed two themes:

  • the need to adapt the law to accommodate and regulate these new technologies; and
  • the opportunities these technologies present for improvements in legal practice and the justice system.

Adapting the law

In relation to the first theme, Lord Hodge began with the example of smart contracts, which by their nature no one can prevent from performing once executed, even where evidence of fraud or misrepresentation is identified. He suggested that in cases such as these the law of unjust enrichment may provide a post-contractual remedy. However, how should contracts that have been prepared using machine learning be treated? What if, in effect, the two contracting parties are computers and the businesses or individuals involved have had limited involvement in agreeing the terms of the contract? Should they nevertheless be bound by it?

As well as the contractual issues, Lord Hodge also identified a number of issues in tort law: where a computer has been used, how should intent be determined? How should liability for harm caused by autonomous machines be attributed? One possibility, he suggests, is to give an AI system, like a corporation, legal personality and to impose an obligation of compulsory third-party insurance against harm caused without fault. In his view, the ‘innovation first’ approach to AI taken by the UK government is valid, but it will require regulators to set appropriate standards of “transparency and explainability”, together with “contestability and redress” through the regulatory system, if questions such as those posed above are to be addressed.

Another problem arises in relation to the ‘ownership’ of intellectual property generated autonomously by a machine. In the recent Court of Appeal case brought by a Mr Thaler (and now appealed to the Supreme Court, which is considering its judgment), the court ruled that no patent could be issued for work generated by Mr Thaler’s machine because, under the Patents Act 1977, only a natural person can be an inventor. The other obvious risk is to data protection, as machine learning accesses and makes use of personal data.

Despite these risks, Lord Hodge is of the view that the potential of digital assets to boost our economy is great. It is therefore important to continue to develop the common law and to pass legislation that specifically recognises this new category of property. He does, however, highlight that, for AI to be adopted safely and for significant discrepancies and disputes to be avoided, an international approach is necessary.

Opportunities

Lord Hodge identifies as a key issue the ever-increasing cost to clients of legal ‘admin’, such as document review and due diligence, areas ripe for disruption by AI. For example, English courts already permit the use of predictive coding software in disclosure: a form of machine learning that takes data input by people about the relevance of a sample of documents and then applies it to much larger document sets.
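By way of illustration only, the Python sketch below shows the general shape of such a workflow: a classifier is trained on a small, human-labelled sample and then used to rank a larger unreviewed set by predicted relevance. It assumes the scikit-learn library, all documents and labels are invented, and it is not the software actually used in the courts.

    # A minimal sketch of predictive coding for document review.
    # Assumes scikit-learn; all documents and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # A small sample of documents reviewed and labelled by a person
    # (1 = relevant to the dispute, 0 = not relevant).
    reviewed_docs = [
        "invoice for consultancy services rendered in March",
        "minutes of board meeting discussing the disputed contract",
        "office party catering order",
        "email chain negotiating the termination clause",
        "staff newsletter about the summer barbecue",
        "draft settlement agreement circulated to counsel",
    ]
    labels = [1, 1, 0, 1, 0, 1]

    # Turn the text into numeric features and fit a simple
    # classifier on the human-coded sample.
    vectoriser = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectoriser.fit_transform(reviewed_docs), labels)

    # Apply the model to the larger, unreviewed set, ranking the
    # documents by predicted probability of relevance so human
    # reviewers can prioritise the likeliest candidates.
    unreviewed_docs = [
        "follow-up email about the contract termination dispute",
        "reminder to submit holiday requests",
    ]
    scores = model.predict_proba(vectoriser.transform(unreviewed_docs))[:, 1]
    for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
        print(f"{score:.2f}  {doc}")

In practice, such tools are usually iterative: the model’s most uncertain predictions are routed back to human reviewers, and the classifier is retrained as more documents are coded.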

He identifies the risk that generative AI could, in future, replace lawyers. However, the more pressing issue at present is the impact on legal training and junior practitioners. He expresses particular uneasiness with tools such as ‘Lex Machina’, a legal analytics tool now owned by LexisNexis, which seeks to predict the outcome of IP claims in advance.

Lord Hodge considers the application of automated adjudication systems (such as that employed by eBay for disputes on its platform) as a means of handling low-value, easily categorised issues with clear precedent, the idea being that only complex or novel issues would then need to proceed before the physical courts. However, Lord Hodge points out that there are considerable hurdles still to be overcome, not least the problem of ‘bias’ already identified in existing predictive models and machine learning programs.

Lord Hodge concludes that it will be incumbent upon law-makers to ensure that legislation, rules and regulation keep pace with the development of AI; the common law alone could not be used to develop an effective legal regime.

Duncan Scott