Illustration of a pixelated judge's gavel hovering above robotic hands.
Judges in England and Wales got the green light to use AI in some tasks.
  • Judges in England and Wales will be allowed to use AI to help perform some of their tasks. 
  • The UK Judicial Office issued guidance stating how they are permitted to use it.
  • It comes after a lawyer used ChatGPT to write a court brief that had six fake cases in it. 

Judges in England and Wales have been given the nod of approval to use AI in parts of their job. 

The decision comes despite several instances in which AI use in the legal system has gone badly wrong.

The UK Judicial Office, which oversees judges, issued guidance Tuesday laying out how AI can be useful for some tasks. It also highlighted areas where the use of AI is not advised.

The guidance states that AI tools can help with tasks such as summarizing large amounts of text and drafting presentations, emails, and a court's decision on a case. However, it warned against relying on them for legal research and analysis. 

That caution likely stems from how badly things backfired when a lawyer tried to use AI in court earlier this year and it cited fake cases as a result of "hallucinations." 

The lawyer had used ChatGPT to write a court brief that cited six fake cases. The law firm he worked for was fined $5,000 after the court was unable to locate the cited cases. 

In another instance this year, a woman used AI to represent herself instead of hiring a lawyer. She used an AI chatbot to submit case law in an appeal for a tax penalty she received, The Telegraph reported in December. 

The woman lost the appeal after it was discovered that the nine cases she cited as part of her defense were made up, although she'd been unaware of that fact, the report said.  

The Judicial Office's guidance alerts judges to such dangers, saying they "must be alive to the potential risks." It warned that AI could be used by members of the public in cases, or worse, to create fake evidence. 

"AI chatbots are now being used by unrepresented litigants," the guidance said. "They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills independently to verify legal information provided by AI chatbots and may not be aware that they are prone to error." 


Read the original article on Business Insider