This is probably one of the most insane things I’ve ever heard about using artificial intelligence in a courtroom. During a hearing before the New York State Supreme Court Appellate Division’s First Judicial Department, an argument was halted by a panel of judges because, well, the person delivering it wasn’t real; it was an AI-generated avatar. It’s appalling that anyone would think this would be an acceptable maneuver in a court case, where honesty is the prime directive.
The incident was captured on a live stream posted to the New York State Supreme Court Appellate Division’s First Judicial Department YouTube page. In it, a panel of five judges was hearing an argument and had accepted a video submitted by the appellant, Jerome Dewald. There’s nothing inherently wrong with that, so the judges thought nothing of it when they asked for the recording to be played. It ran for only a few moments, however, before Justice Sallie Manzanet-Daniels asked what kind of video it was and ordered it shut off.
“It would have been nice to know that when you made your application. You did not tell me that sir. I don’t appreciate being misled.”
Dewald created the AI avatar to speak on his behalf as a pro se litigant, meaning he was representing himself in court. That wording and, honestly, the footage of a clean-cut man with a perfect haircut prompted the Justice to question the video. Dewald apologized to the court, explaining that he didn’t have a lawyer and had hoped the video could be presented on his behalf so he wouldn’t have to rehearse the argument himself. If you skip to 1:42 in the video above, you will see exactly what the judges saw.
Unfortunately, this is not the first time artificial intelligence has been slipped into the justice system in the hope that it would go unnoticed. In one notable case, fake court rulings were cited in legal papers filed by a former personal lawyer of Donald Trump. That was deemed an error, but other situations involving AI avatars have arisen as well.
It is becoming increasingly difficult to distinguish what is real from what is generated by artificial intelligence, especially when individuals like Dewald don’t disclose what they are presenting. Even if this was merely an innocent mistake, the risk of such tools introducing serious errors or creating additional legal problems is considerable. Dewald’s case is still pending, but we will likely see more debate about new laws governing AI in the meantime.