Thursday, September 14, 2023

eCourts Phase III for India - Judiciary

Ease of justice is as important as ease of living. Denial of 'timely justice' amounts to denial of 'justice' itself: timely disposal of cases is essential to maintain the rule of law and provide access to justice.

Speedy trial is a part of right to life and liberty guaranteed under Article 21 of the Constitution.

Erodes social infrastructure: a weak judiciary has a negative effect on social development

Affects human rights: Overcrowding of prisons, which are already infrastructure-deficient and in some cases filled beyond 150% of capacity, results in a "violation of human rights".

Affects the economy: judicial delays are estimated to cost India around 1.5% of its Gross Domestic Product annually.

The digital transformation of India’s judiciary to ensure accessible and efficient legal proceedings is imperative to address the concerns plaguing the justice system in India. The implementation of the eCourts Mission Mode Project, as part of the National eGovernance Plan, stands as a testament to the government’s commitment towards modernizing the judicial system.

As part of the National eGovernance Plan, the e-Courts Project has been under implementation since 2007 for ICT enablement of the Indian Judiciary; Phase II of the project concluded in 2023. Phase III of the e-Courts Project in India is rooted in the philosophy of “access and inclusion”.

Taking the gains of Phase-I and Phase-II to the next level, the e-Courts Phase-III aims to usher in a regime of maximum ease of justice by moving towards digital, online and paperless courts through digitization of the entire court records including legacy records and by bringing in universalization of e-Filing/ e-Payments through saturation of all court complexes with e-Sewa Kendras. It will put in place intelligent smart systems enabling data-based decision making for judges and registries while scheduling or prioritizing cases.

The Centrally Sponsored Scheme of eCourts Phase III is being implemented under the joint partnership of Department of Justice, Ministry of Law & Justice, Government of India and eCommittee, Supreme Court of India, in a decentralized manner through the respective High Courts to develop a judicial system that would promote ease of justice by making the system more accessible, affordable, reliable, predictable, and transparent for all stakeholders.

Expected outcomes of eCourts Phase III are as follows:

• Citizens who do not have access to technology can access the judicial services from eSewa Kendras, thus bridging the digital divide.

• Digitization of court records lays the foundation for all other digital services in the project. It enables processes to become more environmentally friendly by minimizing paper-based filings and reducing the physical movement of documents.

• Virtual participation in court proceedings, reducing associated costs such as travel expenses for witnesses, judges, and other stakeholders.

• Payment of court fees, fines and penalties from anywhere, anytime.

• Expansion of eFiling to reduce the time and effort required to file documents, thereby minimizing human errors through automated document checks and preventing the further creation of paper-based records.

• Use of the latest technologies like AI and its subsets Machine Learning (ML), Optical Character Recognition (OCR), and Natural Language Processing (NLP) to provide a smoother user experience by building a “smart” ecosystem. Registries will have less data entry and minimal file scrutiny, facilitating better decision-making and policy planning. The project envisages smart scheduling and an intelligent system that enables data-based decision making for judges and registries, allowing for greater predictability and optimisation of the capacity of judges and lawyers.

• Expansion of virtual courts beyond the adjudication of traffic violation cases, thereby eliminating the need for the litigant or lawyer to be present in court.

• Enhanced accuracy and transparency in court proceedings.

• Emphasis on automated delivery of court summons by further expanding NSTEP (National Service and Tracking of Electronic Processes), hence drastically reducing delays in trials.

• Use of emerging technologies in court processes will make them more efficient and effective, hence contributing significantly towards the reduction of pending cases.


The e-Committee of the Supreme Court has been overseeing the implementation of the e-Courts Project, conceptualized under the “National Policy and Action Plan for Implementation of Information and Communication Technology (ICT) in the Indian Judiciary-2005”. 

Building on the advancements made in Phases I & II of the project, this document articulates the need to exponentially advance the digitization of courts by:
(a) simplifying procedures,
(b) creating a digital infrastructure, and
(c) establishing the right institutional and governance framework, such as technology offices at various levels, to enable the judiciary to appropriately employ technology.

It articulates key goals for putting in place the digital infrastructure and services for Phase III.

To read the Vision Document, see here

Source: eCommitteeSCI

Tuesday, May 30, 2023

AI Chatbots, and the Courts, and the Lawyers

Are AI Chatbots in Courts putting Justice at risk?

The use of AI in the criminal justice system is growing quickly worldwide, from the popular DoNotPay chatbot lawyer mobile app to robot judges in Estonia adjudicating small claims and AI judges in Chinese courts.... Judges from India to Colombia are using robot lawyers, but experts warn of pitfalls such as false information and algorithmic bias.

"ChatGPT can make up laws and rulings that don't exist. In my view it shouldn't be used for anything important." 

There have been numerous examples of chatbots getting information wrong or making up plausible but incorrect answers - which have been dubbed "hallucinations" - such as inventing fictional articles and academic papers.

There are also concerns over privacy violations and exploitation of judicial data for profit.(Context News)


After a Colombian Judge used ChatGPT to pronounce an order, a Punjab and Haryana High Court Judge took the assistance of ChatGPT while deciding a bail matter in a murder case.

The Colombian Judge Juan Manuel Padilla Garcia said he used the AI tool - ChatGPT to ask legal questions about a case and included its responses in his decision, according to a court document dated 30 January 2023. Besides including ChatGPT’s responses to these questions, the judge also incorporated his own legal arguments and clarified that the AI was used to "extend the arguments of the adopted decision." 

But this faced many criticisms. 

Many professionals voiced their disagreement in this case. Prof Juan David Gutierrez from Rosario University said, “there is a need for urgent digital literacy training for judges”. A judge in Colombia’s Supreme Court, Octavio Tejeiro, said, “AI has instigated a moral panic in law as people feared robots would replace judges”. But he also shared his thoughts on the future acceptance of the AI tool by common people. He said that using ChatGPT for judgment can be unethical and misleading, as it can be imperfect and propose wrong answers. “It must be seen as an instrument that serves the judge to improve his judgment. We cannot allow the tool to become more important than the person,” Tejeiro added.

When ChatGPT was asked what it would say on questions that the Courts and Judges in India have already faced, its responses were uncannily similar to the judgments already pronounced by the Indian Courts.

1. Can Amitabh Bachchan’s pictures, voice and name be used without his consent?

Delhi High Court in November last year passed an interim order restraining persons at large from infringing the personality and publicity rights of Bollywood actor Amitabh Bachchan.

 2. Can reservations be granted solely on the basis of economic criteria?

Supreme Court in its majority verdict pronounced in November last year, upheld the validity of the 103rd Constitutional Amendment Act 2019, which introduced 10 percent reservations for Economically Weaker Sections (EWS) in government jobs and educational institutions.

3. Can forcible sexual intercourse between a husband and his wife in a marital relationship be labeled as rape?

Supreme Court is in the process of hearing petitions challenging the exclusion of marital rape from the definition of rape in the Indian Penal Code (IPC 375).

However, in May last year, the Delhi High Court had delivered a split verdict on petitions seeking criminalisation of marital rape – while one judge said “legitimate expectation of sex” is an “inexorable” aspect of marriage, the other said the “right to withdraw consent at any given point in time forms the core of the woman’s right to life and liberty”. 

Additionally, in September last year, a Supreme Court bench, while passing a verdict on unmarried women being entitled to seek abortion in the 20-24 week period, had held that the meaning of rape must be held to include "marital rape" for the purpose of the Medical Termination of Pregnancy Act and Rules.

 “To explore the use of AI in the judicial domain, the Supreme Court of India has constituted Artificial Intelligence Committee which has mainly identified application of AI technology in translation of judicial documents; legal research assistance and process automation,” Union Law Minister Kiren Rijiju had said.

> ‘Supreme Court Vidhik Anuvaad Software’ (SUVAS), which is being used to translate judgments from English into Indian languages

> ‘Supreme Court Portal for Assistance in Courts Efficiency’ (SUPACE), a tool to help judges conduct legal research.

Source - thequint

In the midst of all this, the Punjab & Haryana High Court in Chandigarh, India, recently employed an artificial intelligence (AI) chatbot called ChatGPT to assist in a bail case. Justice Anoop Chitkara used the tool to gain a wider perspective on the matter and to determine bail jurisprudence in a case where the assault was committed with cruelty.

The chatbot ChatGPT said:

"The severity of the assault, the defendant's criminal history, and the strength of the evidence against them are all factors that a judge will consider when deciding whether to grant bail. It is important to note that the presumption of innocence is a fundamental principle of the justice system, and all defendants are entitled to bail unless there are compelling reasons to deny it. Therefore, even in cases where the assailants have assaulted with cruelty, they may still be granted bail if the judge determines that they do not pose a risk to the community or a flight risk," ChatGPT replied in response to Justice Chitkara's query.

The judge in his order clarified: "Any reference to ChatGPT and any observation made hereinabove is neither an expression of opinion on the merits of the case nor shall the trial Court advert to these comments. This reference is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor." The court subsequently dismissed the bail plea. (Source)

 

As per a BBC news report:


A New York lawyer is facing a court hearing of his own after his firm used AI tool ChatGPT for legal research. A judge said the court was faced with an "unprecedented circumstance" after a filing was found to reference example legal cases that did not exist. 

"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote in an order, issuing a show-cause notice to the lawyer.

The lawyer apologized, saying that he "greatly regrets" relying on the chatbot, which he said he had never used for legal research before and was "unaware that its content could be false". He has vowed to never use AI to "supplement" his legal research in future "without absolute verification of its authenticity". (BBC)

Google has cautioned against 'hallucinating' chatbots, warning of the pitfalls of artificial intelligence in chatbots. (Reuters)

The Italian data-protection authority said it would ban and investigate OpenAI "with immediate effect", citing privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft. The regulator said that not only would it block OpenAI's chatbot but it would also investigate whether it complied with the General Data Protection Regulation (GDPR), which governs the way in which personal data can be used, processed and stored. (BBC)

The regulator said OpenAI had 20 days to say how it would address the watchdog's concerns, under penalty of a fine of €20 million ($21.7m) or up to 4% of annual revenues. (BBC)

The hype around these chatbots is slowly being diluted by some of the glaring erroneous use cases that are being reported in the news.

With the breathless hype that has been spun up around ChatGPT and the underlying Large Language Models (LLMs) such as GPT-3 and GPT-4, to the average person it may seem that we have indeed entered the era of hyperintelligent, all-knowing artificial intelligence. Even more relevant to the legal profession is that GPT-4 seemingly aced the Uniform Bar Exam, which led many to suggest that perhaps the legal profession was now at risk of being taken over by ‘AI’. Yet the evidence so far suggests that LLMs are, if anything, mostly a hindrance to attorneys, as these LLMs have no concept of what is ‘true’ or ‘false’. (Hackaday)

Chatbots like ChatGPT have been known to create fictional responses that appear to have no connection to information found elsewhere online.

A case before the U.S. Supreme Court, likely to be decided by the end of June 2023, asks whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations. According to technology and legal experts, the decision may also have a bearing on situations where an AI model generates a potentially harmful response, and on whether such models should be protected from legal claims like defamation or privacy violations.

Hany Farid, a technologist and professor at the University of California, Berkeley, said that it stretches the imagination to argue that AI developers should be immune from lawsuits over models that they "programmed, trained and deployed."

"When companies are held responsible in civil litigation for harms from the products they produce, they produce safer products," Farid said. "And when they're not held liable, they produce less safe products." (Reuters)

Interesting times ahead. 

To be continued...