The High Court of England and Wales says lawyers must take stronger steps to prevent the misuse of artificial intelligence in their work.
In a ruling tying together two recent cases, Judge Victoria Sharp wrote that generative AI tools such as ChatGPT “are not capable of conducting reliable legal research”.
“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect,” Judge Sharp wrote. “The responses may make confident assertions that are simply untrue.”
That does not mean lawyers cannot use AI in their research, but she said they have a professional duty “to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work”.
Judge Sharp pointed to the growing number of cases in which lawyers (including, in the United States, lawyers representing major AI platforms) have cited apparently AI-generated falsehoods, and said the ruling would be forwarded to professional bodies, including the Bar Council and the Law Society.
In one of the cases, a lawyer representing a man seeking damages against two banks submitted a filing with 45 citations, 18 of which referred to cases that did not exist, while many others “did not contain the quotations that were attributed to them” or “did not support the propositions for which they were cited”, Judge Sharp said.
In the other, a lawyer representing a man who had been evicted from his London home wrote a court filing citing five cases that do not appear to exist. (The lawyer denied using AI, though she said the citations may have come from AI-generated summaries that appeared in “Google or Safari”.) Judge Sharp said that while the court had decided not to initiate contempt proceedings, that decision is “not a precedent”.
“Lawyers who do not comply with their professional obligations in this respect risk severe sanction,” she added.
Both lawyers were either referred, or referred themselves, to professional regulators. Judge Sharp noted that when lawyers fail to meet their duties to the court, the court’s powers range from “public admonition” to the imposition of costs, contempt proceedings, or even “referral to the police”.
