Cops Are Using AI for Suspect Sketches. What Could Go Wrong?


According to Gizmodo, the Goodyear Police Department in Arizona is among the first in the U.S. to use AI-generated suspect sketches in active investigations. Forensic artist Mike Bonasera, who has drawn police sketches for about five years, used OpenAI’s ChatGPT to create the images for two cases in 2025: an attempted kidnapping in April and a shooting in November. He was cleared to use the tech by both department leaders and the Maricopa County Attorney’s Office. Despite a reported “flood of tips” after the first AI image was released in April, neither case has been solved and no arrests have been made. Bonasera argues that the public now largely ignores traditional pencil drawings, which he says justifies the shift to more realistic-looking AI portraits.


Realism Isn’t Reliability

Here’s the thing: just because an image looks more like a photograph doesn’t mean it’s more accurate. That’s the core warning from legal experts cited in the report. A law professor pointed out that in court, everyone understands the fallible, human process behind a hand-drawn sketch. But an AI-generated image? It’s a black box. No one can fully explain how the model arrived at that specific face: not the jury, not the judge, not even the forensic artist. That creates a serious evidentiary problem. The AI is stitching together a face from patterns learned in its training data, which brings us to the next, even bigger issue.

Baked-In Bias

And that issue is bias. Experts warn that these AI models can bake in and amplify societal biases present in their training data. So if a witness describes a suspect in vague terms, the AI may fill in the gaps with stereotypes. This isn’t hypothetical. We’ve already seen law enforcement misuse AI for PR, like when the Westbrook Police Department in Maine had to apologize for using AI to alter a photo of seized drugs to make them look more dramatic. Now apply that same shaky tech to generating images of human suspects. The potential for misidentification, and for reinforcing harmful biases, is enormous, and it could seriously undermine public trust.

A Shortcut With Long-Term Costs

Look, I get the appeal for police. A traditional sketch artist might only do a handful of drawings a year. An AI tool can iterate in seconds based on a witness’s feedback. It feels like a modern solution. But this feels like a shortcut that ignores the long-term legal and ethical costs. It prioritizes generating tips (which, let’s be honest, haven’t solved the cases yet) over the integrity of the investigative process. Once these images are out there, they can shape public perception in a powerful, and potentially erroneous, way. Basically, we’re trading an understood, if imperfect, human tool for a flashy, inscrutable algorithm. That seems like a dangerous trade, especially when someone’s liberty is on the line.
