Our UK and US teams attended a panel discussion in Copenhagen last week on AI’s legal risks and ethical adoption.
⚠️ We explored the three key legal risks presented by AI: regulatory, contractual and IP. In short, while businesses will need to grapple with these risks, we concluded they are unlikely to be the best ‘starting point’ for ensuring ethical and legally compliant adoption.
🧱 Businesses looking to maximise their use of AI and adopt it ethically will instead need to look at their individual use cases and objectives. With that knowledge, they can build an internal framework of guidance and parameters, which then serves as the foundation for compliance with specific laws and regulations.
🐕 The question of whether AI is clever and capable of ‘inventing’ was debated… AI can still struggle to tell a muffin from a chihuahua! We cannot forget that AI is only as clever as the data it is trained on, the searches it can run and the prompts we give it. Against this backdrop, it seems nonsensical to argue there could be a ‘spark’ of creation by the AI itself, but the issue is still being debated globally.
⚖️ The US IP decisions provided some vital ‘practical’ guidance on how to approach patent applications, and on the need for detail at every turn.
🌎 We also considered whether regulation hinders or enables the growth of new technologies. There were understandably mixed views from the audience, and we landed on ‘it depends’.
🤖 The EU AI Act adopts a risk-based approach, but arguably focusses too much on categorising specific technologies rather than on their impact.
💡 This is precisely one of the reasons the UK government has cited for not yet proceeding with new legislation: it wants a pro-innovation approach that protects businesses but focusses on the impact of the technologies, rather than just the types of tech used.
⚡ It’s an interesting conundrum, but we’re already seeing UK and US businesses with global footprints update their existing risk-assessment processes to mirror the EU AI Act’s requirements, so the EU is driving change globally.