
In a first-of-its-kind ruling, a US judge bars using AI-enhanced video in a murder trial

As governments decide how to deal with AI, a judge in Washington has ruled AI-enhanced video cannot be used as evidence.
Written by Artie Beaty, Contributing Writer

In a first-of-its-kind ruling, a Washington judge presiding over a triple murder case has decided that AI-enhanced video cannot be submitted as evidence.

In explaining his decision, Judge Leroy McCullough expressed concerns that AI technology uses opaque methods to show what it "thinks should be shown" and that it could potentially confuse jurors and undermine the testimony of eyewitnesses. Judge McCullough added that accepting the video into evidence could lead to "a time-consuming trial within a trial about the non-peer-reviewable-process used by the AI model."

Also: In a win for humans, federal judge rules that AI-generated artwork can't be copyrighted

The video in question relates to a 2021 shooting outside La Familia Sports Pub and Lounge in Des Moines, Washington, in which three people were killed and two more were wounded. Joshua Puloka is accused of the murders, but he claims he was trying to de-escalate an argument between two people when he was shot at, and that he acted in self-defense when he fired back and struck bystanders.

A 10-second cell phone video posted on Snapchat allegedly shows the shooting. Puloka's defense team tried to submit a version enhanced by machine learning to show events more clearly. In defending the enhanced video, the team said the original has low resolution and a good deal of motion blur, while the upscaled version adds sharpness, definition, and smoother edges to objects. According to KING 5 News in Seattle, the enhanced video has 16 times more pixels than the original.

The Hill reported that the prosecution in the case argued there's no legal precedent for the use of such video and called the enhanced version "inaccurate, misleading, and unreliable."

The judge's ruling comes at a time when governments around the world are grappling with how to handle AI. Just last week, the White House issued plans to regulate the government's use of AI, and in October 2023, the UN assembled an advisory team to coordinate "inclusive" AI governance. Also in October 2023, President Joe Biden issued an executive order that "establishes new standards for AI safety and security."
