In Estonia, citizens can now take small contractual claims to court to be decided by… a computer. In the US, in February, two cases will for the first time be argued in court with "legal assistance" from artificial intelligence programs playing the role of lawyers.
The two disputed cases in the US are traffic fines for speeding. One hearing will be held in person, in a courtroom, and the other via Zoom. Until now, automation programs have only helped drivers challenge parking tickets — and cable company bills — in administrative proceedings, not in court.
In Estonia, only cases with a value of less than 7,000 euros can rely on artificial intelligence (AI). The two parties to a dispute upload their relevant information into the system, and the program issues a decision. A dissatisfied party can, however, appeal to a human judge. There are already 100 "robots" performing this function in the country.
Advantages and disadvantages
A joint study by six Australian organizations points out advantages and disadvantages of using AI-powered functions in the courts. The researchers found two main advantages in Estonia's experience: 1) it helps reduce the backlog of cases; 2) it frees judges to deal with more complex cases.
Another practical advantage comes from China. Using natural language processing powered by artificial intelligence, courts automatically convert everything spoken by judges, lawyers, prosecutors, and witnesses into text. The courts thus provide real-time transcripts of judgments in Mandarin and English, which mainly helps people with hearing impairments.
Among the disadvantages of implementing these AI systems in the courts: 1) they raise concerns about compatibility with core values of the operation of law; 2) they remove the human element from justice and, with it, the moral authority and discretion of judges to apply the law; 3) given the opaque nature of AI systems, judges and parties are left in the dark about how decisions were generated.
The Australian study mainly criticizes COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an AI system used by US courts. Judges use the tool to decide on bail and parole based on data that estimates recidivism or flight risk, rather than on a psychological profile of the defendant and other customary human procedures.
Artificial intelligence models also have a language problem. They were built to understand clear, correct English. They may not understand, for example, accented or mispronounced speech from non-native defendants, or speech filled with slang. And they certainly don't understand other languages.
Furthermore, it has been noted that the system is biased against certain racial groups. For example, an investigation by journalists from the website ProPublica, which examined nearly 10,000 criminal cases in Florida, revealed that black defendants were labeled "high risk" (of recidivism, etc.) far more often than white defendants, who were more often labeled "low risk" — even when those labels proved inaccurate.
One characteristic of the artificial intelligence that doubles as a lawyer is that it lacks the eloquence so prized in real lawyers: it whispers the text into the defendant's ear, and the defendant, wearing a Bluetooth headset, must repeat, word for word, whatever the machine "whispers."
The machine, which uses GPT (Generative Pre-trained Transformer) technology from OpenAI, may in the future have some commercial value in the US, a country where the rate of citizens who represent themselves in court (pro se) is very high, especially in civil actions.
But the startup DoNotPay, the company that created the machine, has not yet put its product on the market, and there is no estimate of when it will be able to do so. For now, the company's only target audience is judges.
That’s because courts generally prohibit the use of artificial intelligence — and Bluetooth-enabled headphones — in courtrooms. DoNotPay hopes that, with the expected success of what they are calling a “robot lawyer”, the courts will change their rules.
The company also fears a reaction from the American Bar Association (ABA). It has announced only that the two trials will take place in February, without revealing the jurisdiction, the date and time, or the identity of the defendants, so that the local ABA chapter cannot interfere with its "robot lawyer" before it is tested in a real trial.
DoNotPay has guaranteed the two volunteer defendants payment of their fines and other court costs should they lose their cases.
With information from USA Today, The Register, People, and the Australian websites LSJ Online and UNSW Sydney.
João Ozorio de Melo is a correspondent for the magazine Legal Adviser in the United States.