Last week we held a well-attended seminar on AI and the Future of the Law. The event was hosted by Francesca Whitelaw KC, who with Jason Beer KC acted for South Wales Police in the Court of Appeal in the world’s first case on the lawfulness of automatic facial recognition, R (Bridges) v Chief Constable of South Wales Police.
The keynote speaker was Peter Jackson, the Chief Data and Technology Officer at Outra, who spoke engagingly and compellingly about the ethics of Artificial Intelligence, arguing that ethics, not law, has to be the first line of defence to misuse or abuse of AI, whether that relates to training data, deployment, risk or the use of what AI generates. As he put it,
“…just because you can, doesn’t mean you should.”
John Goss talked about the impact of AI on public sector decision-making and challenges to it, contrasting the role of the Information Commissioner and that of the Administrative Court, and suggesting that adaptation is likely to be required wherever the balance is struck.
Risk and AI – weighing up the benefits
Alex Ustych spoke on risk and regulation of AI, drawing on examples from the UK and other jurisdictions to weigh up the benefits (including to the legal sector) and costs. The latest research on the impact of AI on jobs suggests scope for significant improvements in productivity, particularly for ‘white collar’ workers. Could John Maynard Keynes’ 1930 prediction of a ‘15-hour week’ in the 21st century come true through AI? Set against this is other research suggesting that AI could lead to the loss of up to 300 million jobs worldwide. By contrast, the impact on healthcare (such as earlier cancer detection via scans and the accelerated discovery of medicines/vaccinations via ‘Pharma AI’) appears more clearly positive.
Uses of AI in the legal sector
In the legal sector, the use of general-purpose AI will remain risky until the AI's tendency to 'hallucinate' (i.e. to invent false but seemingly credible information, or even case law) is addressed. But some uses are already being explored by firms: Fletchers built a 'Decision Support System' to sift potential medical negligence claims, and Allen & Overy integrated Harvey (based on GPT-4) for contract analysis, due diligence and litigation.
In the next five years, AI may be used to help address the perennial court backlogs, with straightforward Small Claims Track cases (to start with) being decided by AI. The court system may adopt a tiered fee structure depending on whether litigants wish to resolve more complex (Fast or Multi Track) disputes via AI (quicker and cheaper) or a human judge (slower and more expensive). The Master of the Rolls, speaking recently about the impact of AI, said that,
‘we will all have to get with the programme’.
However, safeguards are likely to include litigants always being told whether a decision has been taken by AI, and a 'human' route of appeal.
In terms of regulation, China (a significant AI market) was first out of the gate with focused AI regulation (with legislation expected to be in force this year). The EU's legislative effort is more cumbersome (although there is now a political consensus on the terms of the EU AI Act in the European Parliament, it is some way off becoming law and unlikely to be fully in force until 2025) but provides a more comprehensive, risk-based approach. The AI Act's requirements for compliant AI range from 'minimal risk' applications (such as spam filters) to 'unacceptable risk' applications (such as AI used for social scoring and manipulation). Following last-minute changes to the Act, there will likely be separate requirements for general-purpose AI (such as GPT-4) around design, deployment, disclosure of any copyrighted material used and mitigation of reasonably foreseeable risks.
The UK’s approach to AI
By contrast, the UK Government's AI White Paper proposed a 'light-touch' approach to AI, using existing regulators and laws (such as the UK GDPR) rather than new legislation to provide a 'proportionate' response. The Information Commissioner welcomed the "ambitions to empower responsible innovation and sustainable economic growth." In March 2023, the Information Commissioner's Office updated its "Guidance on AI and Data Protection", focusing on fairness in AI and reflecting the Government's 'vision'. This approach may prioritise reducing red tape for businesses (and so draw AI business to the UK over Europe) rather than holding the line on privacy. It may, however, run into two problems: (a) AI businesses, particularly startups, may comply with the EU AI Act in any event, because they will want to be able to scale their products to Europe; and (b) if the EU feels that the UK's approach is too laissez-faire, the EU's adequacy decision on the UK data protection regime may be imperilled.
The only constant is change
The wide-ranging and thought-provoking talks at our event demonstrated the enormous potential AI has to change all elements of society. Inevitably, law – and lawyers – will have to change in response. 5 Essex Court's specialist information law barristers continue to be at the cutting edge of this developing field.