The image you see above, a "tranquil river in the forest," is a still photo captured from a video linked to a New York Times article, "Instant Videos Could Represent the Next Leap in A.I. Technology." I am not sure whether The Times' paywall will permit non-subscribers to read the article and view the embedded video, but you can give it a click to see. The "video," as you will discover if you are able to read the article, is totally fake. An A.I. program made it up.
Since I teach a course at UCSC called "Privacy, Technology, And Freedom," I pay close attention to news stories about the newest thing in "tech." As anyone reading this blog posting is probably aware, A.I., or "Artificial Intelligence," is the latest "big thing." In fact, the edition of The New York Times that carried the story about "instant videos," which I have linked above, also had an article on "Police Tech." That's worth reading, too, if you are interested in how continued police surveillance of everyone is likely to change our lives.
A lawyer-like thought came to mind as I read the article about the use of A.I. to produce "instant videos" that will soon be so realistic that it will be virtually impossible to distinguish between a video "made up" by an A.I. program and an "actual video" that accurately depicts something that really happened, in "real life."
When it is no longer possible to tell the difference, which is what The New York Times article predicts will be the case rather soon, then the use of videos to convict criminal defendants of crime will be - the way I see it - significantly impaired.
Right now, doorbell videos, videos taken by bystanders who observe criminal conduct, and similar video evidence of crime are admissible in court and are used to prove guilt. But what if there will soon be a significant chance, as to any video produced to show a crime in progress, that the video is not actually depicting "real" events, but is simply an A.I.-produced "deepfake"?
Has anyone thought about that? Criminal defendants are given the "benefit of the doubt" when evidence is produced against them in court. If even one juror has a "reasonable doubt" that a defendant is guilty, that defendant will not be convicted. Proof "beyond a reasonable doubt" is the standard used to establish criminal conduct. Given that fact about criminal law, many people who are not "fans" of our former president worry about whether it was really wise to bring the recent charges against him (reported on, by the way, in the same edition of The New York Times that reports on the A.I. advances I am commenting on in this blog posting). The concern is that at least one juror on the jury ultimately called upon to judge the evidence will have a "reasonable doubt" about some aspect of the charges against Mr. Trump. If that happens, our former president will not be convicted of the crimes charged against him, with imponderable political effects.
Well, back to my main point. When A.I. is able to create semblances of "reality" that cannot be effectively distinguished from "real evidence," then EVERY criminal defendant, entitled to be given the benefit of any "reasonable doubt" about the evidence produced against that defendant, will be able to raise such a "reasonable doubt" about even the most authentic evidence that police agencies produce.
Again, has anybody thought about this? That could really make it hard to convict real criminals of their real crimes.
Another benefit of a life lived in our new, "high-tech" world!
Image Credit:
https://www.nytimes.com/2023/04/04/technology/runway-ai-videos.html
Gary, Jim Wheaton here. Nice thought piece. And thanks, cuz I am being interviewed on Tuesday about the legal implications of AI and deepfakes. And I don't know what to say beyond, "oh shit."
But your example has a rejoinder methinks. Documents have been with us for millennia. They are evidence. But they can be forged. They can be altered after the fact. Thus documents are not automatically admitted into evidence. The party seeking to use a document must authenticate it and provide a witness to testify that the court is being offered a "true and correct copy." That testimony can be tested and challenged.
The same tools can be brought to bear on videos. Though, as you point out, they will be used to try to attack and prevent admission of *real* videos. But once admitted, a juror cannot "retry" the authenticity or admissibility of any evidence.
Seems a bit like old wine in new bottles…
Thanks again.