As Artificial Intelligence rapidly becomes embedded in society, lawyers are seeking clear and practical advice about how to use AI tools efficiently in legal practice while guarding against professional, ethical and legal risk. The following recent guidance and case law may provide a useful starting point.
In October 2025, updated guidance on the use of AI was issued to judicial office holders and associated staff, replacing the previous April 2025 guidance; it also provides assistance to the legal profession more widely. The guidance articulates the overarching principle that any use of AI must protect the integrity of the administration of justice, and it emphasises personal responsibility for ensuring this. It helpfully contains a glossary of key AI-related terms and breaks down responsible use into several key principles, which may be summarised as follows:
a. Understanding AI and its limitations
b. Confidentiality and privacy
c. Accuracy and accountability
d. Awareness of use by others
The guidance goes on to give examples of tasks that AI tools may usefully perform, such as summarising long texts, drafting presentations or carrying out administrative tasks; tasks that are not recommended, such as legal research to find new, unverified information or deep legal analysis and reasoning; and red flags indicating AI use, such as references to unfamiliar cases or odd citations.
On 25 November 2025, the Bar Council published its own updated guidance, Considerations when using ChatGPT and generative artificial intelligence software based on large language models. This is explicitly not “guidance” for the purposes of the BSB Handbook I6.4, but rather a set of principles and warnings reflecting current professional expectations, particularly in light of recent High Court judgments on professional responsibility. Its stated purpose is ‘To provide barristers with a summary of considerations if using ChatGPT or any other generative AI software based on large language models (LLMs)’. As well as OpenAI’s ChatGPT, it names Google’s Gemini, Perplexity, Harvey and Microsoft Copilot (also based on OpenAI technology) as general examples, and Lexis+ AI, Clio Duo and Thomson Reuters CoCounsel as law-specific examples, while recognising that these technologies are advancing rapidly and that professionals need to be flexible and adaptable.
The Bar Council guidance begins not just by defining LLMs but by explaining them: how they differ from traditional research tools and how they work. It highlights ChatGPT specifically, as it remains the most widely known LLM and shares technology with Microsoft Copilot. This introduction sets up the guidance’s focus on the risks of LLMs: anthropomorphism; hallucinations; information disorder; bias in training data; mistakes and confidential training data; and cyber security vulnerabilities. The guidance then translates these risks into challenges for barristers, which are equally applicable to all lawyers using LLMs.
The Solicitors Regulation Authority has yet to issue equivalent specific AI guidance, but on 1 October 2025 the Law Society published an article entitled Generative AI: the essentials, which is intended to be a living document and makes useful reading alongside both of the above sets of guidance. Like the Bar Council guidance, it specifically considers the recent decision of the Divisional Court in R (on the application of Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin).
Ayinde highlighted the key principles and risks associated with using generative AI tools such as ChatGPT in legal research and drafting, and explicitly called for guidance to be provided to the legal profession.
Ayinde also made clear that, where a legal representative relies on false authorities as a result of unverified legal research, the Court’s decision in that case not to initiate contempt proceedings does not set a precedent, and that referrals may be made to professional regulators such as the Bar Standards Board (BSB) or the Solicitors Regulation Authority (SRA).
In MS (Bangladesh) (Professional Conduct: AI Generated Documents) [2025] UKUT 00305 (IAC), the Upper Tribunal applied the Ayinde guidance and referred a barrister to the BSB after they cited a false case generated by ChatGPT, having failed to check the citation’s authenticity.
A series of judgments has now highlighted the dangers of relying on AI tools for legal research without human checks, but lawyers should not be deterred from using AI responsibly as a useful tool. In Evans v Revenue and Customs Commissioners [2025] UKFTT 01112 (TC), a judge of the First-tier Tribunal (Tax Chamber) concluded his judgment by saying “I have used AI in the production of this decision”, referring to the previous April 2025 guidance to judicial office holders and describing AI as a “tool” that was “well-suited” to the application before him. He explained how he had used it to summarise documents in the case before satisfying himself that the summaries were accurate. He cited Medpro Healthcare v HMRC [2025] UKUT 255 (TCC) at [43] in confirming that “the critical underlying principle is that it must be clear from a fair reading of the decision that the judge has brought their own independent judgment to bear in determining the issues before them”.
The key message for lawyers to emerge from this recent guidance and case law, then, is this:
AI offers great opportunity: it comes with equally great responsibility.
A monthly data protection bulletin from the barristers at 5 Essex Chambers
The Data Brief is edited by Francesca Whitelaw KC, Aaron Moss and John Goss, barristers at 5 Essex Chambers, with contributions from the whole information law, data protection and AI Team.


