Lord Justice Birss, a senior figure in UK intellectual property law, has admitted to using OpenAI’s ChatGPT chatbot to draft part of a judgment. This marks the first known instance of a British judge using ChatGPT for such a purpose.
The AI tool was tasked with summarizing a specific area of law, and the judge found the generated paragraph satisfactory for inclusion in his ruling. While Birss emphasized his full accountability for the content of his judgment, this development underscores the increasing intersection of technology and the legal profession.
Global Implications and Concerns
The use of AI in legal proceedings isn’t confined to the UK. In Colombia, Judge Juan Manuel Padilla consulted ChatGPT while deciding a case involving an autistic child’s medical expenses, and the AI’s response aligned with his final verdict. Not all interactions with AI tools have been positive, however. In New York, two attorneys faced penalties for relying on ChatGPT to draft a legal brief that cited fictitious cases, a cautionary tale about the pitfalls of over-reliance on AI in sensitive domains.
The Broader Legal Landscape
Amidst these technological advancements, the legal community remains divided on various issues. At a recent Law Society conference, James Perry, chair of the Society’s dispute resolution committee, criticized judges for their silence on certain government reforms.
Perry’s concerns revolved around the potential harm these reforms could inflict on civil justice. Lord Justice Birss, who was present, responded by expressing optimism about the potential of technological innovations like ChatGPT in the legal sector. However, he reiterated the importance of human responsibility in the final outcomes.
A recent study from the University of Montana found that ChatGPT scores in the top 1% on standard creativity tests, outperforming 99% of college students. The chatbot, powered by OpenAI’s GPT-4 large language model (LLM), beat the performance of most students who took the same assessments.
ChatGPT produced eight answers that were evaluated alongside 24 responses from UM students. The scores were then compared with the national average of 2,700 college students who took the Torrance Tests of Creative Thinking (TTCT) in 2016. Scholastic Testing Service, a standardized testing provider, graded all submissions without knowing which came from the AI. ChatGPT performed exceptionally well in fluency and originality, ranking in the highest percentile, but scored slightly lower, at the 97th percentile, for flexibility.