High Court warns lawyers over AI misuse

Published:
June 9, 2025 6:05 PM
Need to know

A senior High Court judge has issued a warning about the risks of AI misuse after two cases involving fake AI-generated case citations.

Dame Victoria Sharp emphasised the need for greater oversight and responsibility, singling out managing partners, heads of chambers and regulators.

The High Court delivered a firm rebuke to the legal profession over the misuse of AI in law on Friday (6 June), reviewing two cases in which counsel submitted fictitious, AI-generated case citations to the court.

In her judgment, Dame Victoria Sharp, president of the King’s Bench Division, underscored the risks of relying on AI tools in legal work, calling for tighter oversight and greater responsibility within the profession.

The cases

In one case that has been widely reported, pupil barrister Sarah Forey cited five non-existent legal authorities while preparing for a judicial review hearing where she was representing Haringey Law Centre. Forey denied using AI directly but admitted she may have inadvertently incorporated AI-generated summaries from Google or Safari into her work.

In the other case, solicitor Abid Hussain relied on a client’s research that included 18 fictitious cases, all of which were later discovered to be AI-generated fabrications.

The court ruled that Forey’s actions met the threshold for contempt of court but chose not to pursue formal proceedings given her “extremely junior” status and absence of intent. The court did, however, refer her to the Bar Standards Board (BSB). Similarly, while Hussain’s reliance on his client’s research was deemed “extraordinary”, no further action was taken beyond referring the matter to the Solicitors Regulation Authority (SRA).

Both Forey and Hussain have also referred themselves to their respective regulators, the BSB and the SRA, for further investigation.

Dame Victoria Sharp’s warning

In her judgment, Dame Victoria found that the current guidance for judges, barristers and solicitors around AI "is insufficient to address the misuse of artificial intelligence".

"More needs to be done", she said, to ensure that "every individual currently providing legal services within this jurisdiction… understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence".

While acknowledging that AI tools can be useful, Dame Victoria stressed that they are "not capable of conducting reliable legal research". She noted that tools such as ChatGPT can produce seemingly plausible but inaccurate or fabricated information, so-called "hallucinations".

Dame Victoria singled out those with "individual leadership responsibilities", namely managing partners, heads of chambers and regulators, to bring about "practical and effective measures" for change.

The ruling comes amid a sharp rise in legal AI use in recent years. A 2024 survey by LexisNexis found that 41% of lawyers in the UK were using AI tools in their work, up from just 11% in July 2023.

Law Society response

In response to the judgment, Ian Jeffery, CEO of the Law Society of England and Wales, commented: "This High Court judgment lays bare the dangers of using AI in legal work". He added: "Artificial intelligence tools are increasingly used to support legal service delivery. However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review and ensure the accuracy of their work".

The Law Society, Jeffery said, "has already provided guidance and resources to the legal profession. We will continue to develop and expand them as part of our wider programme of supporting members with technology adoption".
