Pages

Tuesday, May 30, 2023

AI Chatbots, the Courts, and the Lawyers

Are AI Chatbots in Courts putting Justice at risk?

The use of AI in the criminal justice system is growing quickly worldwide, from the popular DoNotPay chatbot lawyer mobile app to robot judges in Estonia adjudicating small claims and AI judges in Chinese courts. Judges from India to Colombia are also turning to AI tools, but experts warn of pitfalls such as false information and algorithmic bias.

"ChatGPT can make up laws and rulings that don't exist. In my view it shouldn't be used for anything important." 

There have been numerous examples of chatbots getting information wrong or making up plausible but incorrect answers - which have been dubbed "hallucinations" - such as inventing fictional articles and academic papers.

There are also concerns over privacy violations and exploitation of judicial data for profit. (Context News)


After a Colombian judge used ChatGPT while pronouncing an order, a Punjab and Haryana High Court judge took the assistance of ChatGPT while deciding a bail matter in a murder case.

Colombian judge Juan Manuel Padilla Garcia said he used the AI tool ChatGPT to ask legal questions about a case and included its responses in his decision, according to a court document dated 30 January 2023. Besides including ChatGPT's responses to these questions, the judge also incorporated his own legal arguments and clarified that the AI was used to "extend the arguments of the adopted decision."

But the move drew widespread criticism.

Many professionals voiced their disagreement in this case. Prof Juan David Gutierrez of Rosario University said there was a need for "urgent digital literacy training for judges". Octavio Tejeiro, a judge in Colombia's Supreme Court, said AI had instigated a moral panic in law as people feared robots would replace judges, though he also expected the tool to eventually gain acceptance among the public. He cautioned that using ChatGPT for judgments can be unethical and misleading, as such tools can be imperfect and propose wrong answers. "It must be seen as an instrument that serves the judge to improve his judgment. We cannot allow the tool to become more important than the person," Tejeiro added.

When ChatGPT was asked legal questions that Indian courts have already ruled on, its answers were uncannily similar to the judgments already pronounced.

1. Can Amitabh Bachchan’s pictures, voice and name be used without his consent?

The Delhi High Court, in November last year, passed an interim order restraining persons at large from infringing the personality and publicity rights of Bollywood actor Amitabh Bachchan.

2. Can reservations be granted solely on the basis of economic criteria?

The Supreme Court, in a majority verdict pronounced in November last year, upheld the validity of the 103rd Constitutional Amendment Act, 2019, which introduced 10 percent reservation for Economically Weaker Sections (EWS) in government jobs and educational institutions.

3. Can forcible sexual intercourse between a husband and his wife in a marital relationship be labeled as rape?

The Supreme Court is in the process of hearing petitions challenging the exclusion of marital rape from the definition of rape in the Indian Penal Code (Section 375, IPC).

However, in May last year, the Delhi High Court had delivered a split verdict on petitions seeking criminalisation of marital rape – while one judge said “legitimate expectation of sex” is an “inexorable” aspect of marriage, the other said the “right to withdraw consent at any given point in time forms the core of the woman’s right to life and liberty”. 

Additionally, in September last year, a Supreme Court bench, while passing a verdict on unmarried women being entitled to seek abortion in the 20-24 week period, had held that the meaning of rape must be held to include "marital rape" for the purposes of the Medical Termination of Pregnancy Act and Rules.

“To explore the use of AI in the judicial domain, the Supreme Court of India has constituted an Artificial Intelligence Committee which has mainly identified application of AI technology in translation of judicial documents, legal research assistance and process automation,” Union Law Minister Kiren Rijiju had said.

> ‘Supreme Court Vidhik Anuvaad Software’ (SUVAS), which is being used to translate judgments from English into Indian languages.

> ‘Supreme Court Portal for Assistance in Courts Efficiency’ (SUPACE), a tool to help judges conduct legal research.

Source: The Quint

In the midst of all this, the Punjab & Haryana High Court in Chandigarh, India, recently employed the AI chatbot ChatGPT to assist in a bail case. Justice Anoop Chitkara used the tool to gain a wider perspective on bail jurisprudence in a case where the assault involved cruelty.

The chatbot ChatGPT said:

"The severity of the assault, the defendant's criminal history, and the strength of the evidence against them are all factors that a judge will consider when deciding whether to grant bail. It is important to note that the presumption of innocence is a fundamental principle of the justice system, and all defendants are entitled to bail unless there are compelling reasons to deny it. Therefore, even in cases where the assailants have assaulted with cruelty, they may still be granted bail if the judge determines that they do not pose a risk to the community or a flight risk," ChatGPT replied in response to Justice Chitkara's query.

The judge in his order clarified: "Any reference to ChatGPT and any observation made hereinabove is neither an expression of opinion on the merits of the case nor shall the trial Court advert to these comments. This reference is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor." The court subsequently dismissed the bail plea. (Source)

 

As per a BBC news report:


A New York lawyer is facing a court hearing of his own after his firm used AI tool ChatGPT for legal research. A judge said the court was faced with an "unprecedented circumstance" after a filing was found to reference example legal cases that did not exist. 

"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote in an order, issuing a show-cause notice to the lawyer.

The lawyer apologised, saying that he "greatly regrets" relying on the chatbot, which he said he had never used for legal research before, and that he was "unaware that its content could be false". He has vowed never to use AI to "supplement" his legal research in future "without absolute verification of its authenticity". (BBC)

Google has cautioned against 'hallucinating' chatbots, warning of the pitfalls of artificial intelligence in such tools. (Reuters)

Italy's data-protection authority said it would ban ChatGPT "with immediate effect", citing privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft. The regulator said that not only would it block OpenAI's chatbot, but it would also investigate whether the company complied with the General Data Protection Regulation (GDPR), which governs the way personal data can be used, processed and stored. (BBC)

The regulator gave OpenAI 20 days to say how it would address the watchdog's concerns, under penalty of a fine of €20 million ($21.7m) or up to 4% of annual revenues. (BBC)

The hype around these chatbots is slowly being diluted by some of the glaring erroneous use cases that are being reported in the news.

With the breathless hype that has been spun up around ChatGPT and the underlying Large Language Models (LLMs) such as GPT-3 and GPT-4, to the average person it may seem that we have indeed entered the era of hyperintelligent, all-knowing artificial intelligence. Even more relevant to the legal profession is that GPT-4 seemingly aced the Uniform Bar Exam, which led many to suggest that perhaps the legal profession was now at risk of being taken over by 'AI'. Yet the evidence so far suggests that LLMs are, if anything, mostly a hindrance to attorneys, as these LLMs have no concept of what is 'true' or 'false'. (Hackaday)

Chatbots like ChatGPT have been known to create fictional responses that appear to have no connection to information found elsewhere online.

In a case before the U.S. Supreme Court, likely to be decided by the end of June 2023, the question is whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations. According to technology and legal experts, the decision may also have a bearing on cases where an AI model generates a potentially harmful response, and on whether AI developers should be protected from legal claims like defamation or privacy violations.

Hany Farid, a technologist and professor at the University of California, Berkeley, said that it stretches the imagination to argue that AI developers should be immune from lawsuits over models that they "programmed, trained and deployed."

"When companies are held responsible in civil litigation for harms from the products they produce, they produce safer products," Farid said. "And when they're not held liable, they produce less safe products." (Reuters)

Interesting times ahead. 

To be continued...