Hello!
We’re a little behind this month, but in our defense, Hannah got married, Josie was in attendance, and then we both had to recover. It was a wonderful time, where we learned Josie really loves the song “Walking in Memphis.”
Now, on to the slightly less cheery content you know and love!
It feels difficult to avoid talking about AI right now – whether it’s the environmental impacts (bad), the rampant use by students (also probably bad), or the ways it’s wriggled into the criminal legal system (Bad with a capital B). On that last front, there’s a lot to dive into. From chatbots drafting police reports to algorithms nudging judges on bail decisions, AI is being slotted into nearly every corner of the criminal legal system. Public agencies are moving fast, contracting with private companies to adopt tools that promise efficiency, safety, or cost savings but rarely come with accountability (or, for that matter, deliver on those promises). This creates a rich reporting environment for journalists to investigate how these systems work (or don’t), who profits, and who gets harmed.
We want to zoom in on the increasing use of AI in policing and incarceration, starting with a high-level Q&A with our friend Matthew Guariglia of the Electronic Frontier Foundation.
What are the most common ways AI is currently being used in policing and incarceration, and what important uses or consequences do you think are flying under the radar in mainstream media coverage?
Outsourced decision making happens at the street level – deciding whether a person looks like a suspect, or whether a person’s record and associations have deemed them “likely to commit a crime” – but it also often plays a key role in decisions about bail, sentencing, and eventual release, all without the defendant knowing that technology is making the decision. Part of the problem is that police, who claim these AI-generated tips are just an “investigatory lead,” often go out and arrest people immediately when their names are produced by face recognition, automated license plate readers, or other forms of algorithmic recognition and decision making. Sometimes, we’ve seen that the people whose names were produced by this tech were not even in the state on the day a certain crime was committed, but police arrest them anyway (on the recommendation of the AI), and it might take that person months, if not over a year, to get out of jail while they await trial.
This tech flies under the radar because we have no universal standards for disclosing when the technology is used, for allowing defense attorneys to understand how the (proprietary and profitable) technology even works, or for allowing defendants to appeal decisions made by computers. In most cities, police don’t even need to reveal to or seek approval from elected officials, let alone the general public, regarding what technology they buy, how they use it, and how it works.
What are the biggest gaps in transparency or accountability when it comes to AI being used in the criminal legal system, and how can journalists push for access to information that’s often hidden behind proprietary tech or “public safety” exemptions?
Right now, at the local, state, and federal level, people are scrambling to understand exactly what decisions in the criminal justice system are being made by unaccountable, non-transparent third-party AI, whether it is possible to audit how those decisions were made, and whether it is possible to appeal them. Journalists can play a big role in forcing transparency by investigating which contracts police departments hold with AI companies, comparing how those companies claim their technology works against the effects it’s actually had in other cities, and, most importantly, covering the nationwide movement to pass local laws that disclose and protect people from outsourced decision making in the criminal justice system.
What role do private tech vendors play in shaping how AI is deployed for “public safety” purposes, and what should journalists be asking about those partnerships?
I think people would be shocked if they knew how much of policing right now, both in policy and in practice, is being dictated by profitable private corporations. They make tools that dictate to police who should be a suspect and who is likely to commit a crime — opening people up to police harassment and surveillance.
These companies also have incredibly effective marketing departments, often made up of former police, who know how to pitch technology (that can often be invasive, ineffective, or totally useless) so that both police and the press report on it as if it’s a silver bullet that will end crime forever. When police buy this technology, it usually comes with pre-written marketing materials – talking points, statistics, and press releases – to be used when the technology assists in making an arrest. Local reporters often reprint these claims verbatim, unaware that this material was written by the companies themselves, not by the police.
In this way, police have become advertisers and influencers for companies that are able to launder their marketing materials through the supposedly objective and authoritative voice of public police departments. Journalists should think more critically about where data about the effectiveness of surveillance technology comes from; they should try to get access to communications and documents shared between companies and police departments; and, most importantly, all journalists in the United States should be scrutinizing the cozy and profitable relationship between police departments and these billion-dollar tech companies.
—
As Matthew points out, many of these tools operate with little oversight—yet they’re already shaping high-stakes outcomes. We’ve highlighted a few important uses of these technologies below, along with potential opportunities for reporting on them.

AI-Generated Police Reports
Police departments across the country are piloting AI transcription tools that generate reports from bodycam footage or officer narration. These systems raise big, foundational questions that are a good place to start reporting.
Consider:
How does AI handle contradiction or ambiguity — common in cases of police violence? And what happens when the system “hallucinates” (as it is wont to do)?
Who, if anyone, is auditing the algorithm’s output? Are edits made to the AI transcription traceable?
License Plate Readers
We’ve mentioned ALPRs — Automated License Plate Readers — in previous issues, but we now know a lot more about how they’re being used, thanks to some excellent reporting. (Before we get into it, we should note that these cameras capture a whole lot more than just license plates — with powerful object recognition and backend data analytics, license plate readers are just license plate readers in the same way that an iPhone is just a telephone.) According to a new piece from 404 Media, ICE is now tapping into this nationwide network of roadside surveillance, and local police have started to expand its use. One Texas police officer used the nationwide Flock network to try and hunt down a woman who had just had an abortion. As 404’s two bombshell stories last week show, there’s plenty of potential for local and regional stories here.
Consider:
Are there more of these cameras in predominantly Black and brown neighborhoods? Consider mapping the ALPRs in your city and cross-referencing them with Census data (a rough sketch of one way to do this follows this list).
What is your police department’s policy on what to do when they get a ping on a “hot listed” car? Is there a verification process to ensure it’s the right car? How long do they hold onto plate photos?
Try asking about misreads – one randomized study out of California found that 37% of fixed ALPR hits were misreads.
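For anyone who wants to take a data-driven crack at that first question, here’s a minimal Python sketch of one way to cross-reference camera locations with Census demographics using the geopandas library. The file names, column names, and demographic fields below are hypothetical placeholders – swap in whatever your records request or your city’s open-data portal actually returns – and treat the output as a starting point for reporting, not a finding on its own.

import geopandas as gpd
import pandas as pd

# ALPR locations as a CSV of latitude/longitude points (hypothetical columns)
cameras = pd.read_csv("alpr_cameras.csv")  # columns: camera_id, lat, lon
cameras = gpd.GeoDataFrame(
    cameras,
    geometry=gpd.points_from_xy(cameras["lon"], cameras["lat"]),
    crs="EPSG:4326",
)

# Census tract boundaries plus demographics, e.g. exported from data.census.gov
# (hypothetical columns: GEOID, total_pop, black_pop)
tracts = gpd.read_file("census_tracts.geojson").to_crs("EPSG:4326")

# Spatial join: which tract does each camera sit in?
joined = gpd.sjoin(cameras, tracts, how="left", predicate="within")

# Count cameras per tract and normalize by population
per_tract = (
    joined.groupby("GEOID")
    .agg(
        cameras=("camera_id", "count"),
        total_pop=("total_pop", "first"),
        black_pop=("black_pop", "first"),
    )
    .reset_index()
)
per_tract["cameras_per_10k"] = per_tract["cameras"] / per_tract["total_pop"] * 10_000
per_tract["pct_black"] = per_tract["black_pop"] / per_tract["total_pop"] * 100

# A rough first look: do camera densities track tract demographics?
print(per_tract[["pct_black", "cameras_per_10k"]].corr())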
Facial Recognition Software
Police departments — and increasingly housing authorities, schools, and transit agencies — are using facial recognition software, sometimes without public disclosure. A recent Washington Post investigation by Douglas MacMillan and Aaron Schaffer revealed that New Orleans police were using a widespread facial recognition network to aid in arrests — a first for a major U.S. city. The Post’s investigation raised a few interesting points that might have applicability elsewhere: 1) the wide surveillance network was run and paid for by a private nonprofit, and 2) people arrested as a result of the surveillance network weren’t told that they had been found by AI, denying them the chance to contest the use in court.
Vendors for this tech include Clearview AI (which scraped billions of images without consent), Amazon’s Rekognition, Verkada, and Motorola, among others.
Consider:
Facial recognition tech is famously bad at identifying faces that are not white and not male. It struggles to differentiate between Black faces, it’s especially bad at accurately recognizing Black women, and accuracy decreases overall as people age.
There are a lot of partnerships between public agencies like police and private nonprofits, like Project NOLA, which provided the surveillance network in New Orleans. This can make it harder – but not impossible – to get records.
Constant surveillance makes the 4th Amendment ever murkier – consider talking to public defenders about how they’re litigating these kinds of cases.
Expand your scope beyond just police: look into schools, housing agencies, or public transit systems.
Algorithmic Risk Scores in the Criminal Legal System
Many judges and parole boards use COMPAS (the very Orwellian-sounding Correctional Offender Management Profiling for Alternative Sanctions) and other algorithmic risk tools to allegedly assess a person’s risk of recidivism “neutrally” – but studies show they often just end up reinforcing racial and class bias. Worse, the math behind most of these tools is proprietary, so defendants can’t challenge it.
Consider:
What data was used to train the algorithm, and what assumptions are baked into its design?
How often do judges or parole boards overrule the risk tools’ recommendations?
Public defenders are probably closest to this problem in a lot of ways – what are they seeing show up in pretrial risk assessments? How has this changed, if at all, over the past few years?
The Next Frontier: AI in Prisons and Jails
The use of AI is quietly expanding inside jails and prisons — scanning phone calls, emails, and even physical letters. There’s also the chilling prospect of robots replacing human workers (mostly guards) in prisons and jails – a program piloted by a metro-Atlanta jail last fall.
Vendors include GTL/ViaPath, Securus, Leo Technologies, and even Amazon Web Services.
Consider:
How the introduction of AI/automation affects conditions inside. Does it lead to more dehumanization or abuse? Do robot guards mean prisons and jails need less public money? How does it impact the working conditions for human guards?
To what extent do incarcerated people know they’re being subjected to various forms of AI surveillance?
If you’re digging through any of these technologies, here are some good places to start looking for records:
Look for contracts, MOUs, training manuals, and policies used by your local police for things like automated police reports, facial recognition tech, ALPRs, etc. For ALPR vendors specifically, look for contracts between police and fusion centers. Also look for contracts/communications between these vendors and ICE or DHS.
Procurement databases for relevant vendors; procurement documents and contracts.
Internal communications about pilot programs or deployments of any of these programs.
Local policies (or lack thereof?) on data sharing and retention.
If your open records requests hit a wall, try alternative routes: audits, legislative briefings, shareholder reports — or collaborate with transparency orgs who’ve fought similar fights. And as always, get first-hand reports from incarcerated people (or their families) about changes in surveillance.
There’s resistance to all of this, too. Some cities and states — including Hannah’s home state of Vermont! — have banned facial recognition tech in policing. Others are demanding algorithmic transparency or passing local laws to govern AI use. (This might be short-lived: a controversial clause of Trump’s “Big Beautiful Bill” would impose a 10-year moratorium on state and local regulation of AI systems, including facial recognition and automated decision making.) Talk to organizers, technologists (shout out again to EFF), civil liberties lawyers, vocal policymakers and watchdog groups, and community members fighting for transparency.
A few final things:
FWD.us just came out with a new accounting of the costs that families bear when their loved ones are incarcerated. We Can’t Afford It: Mass Incarceration and the Family Tax captures the financial toll prisons and jails exact on American families, from bail and commissary to phone calls and lost wages. Lots of jumping off points for reporting here:
You could profile impacted families, showing how the extra $4,200/year spent on commissary and calls compares to other expenses like rent or school supplies or childcare.
Explore how 1 in 5 families must move due to incarceration, costing $2,360 per move. Maybe map local eviction and displacement trends tied to incarceration—what’s the ripple effect on neighborhoods?
Investigate local labor impacts: do arrests and jailings drive family members out of the workforce or intensify gig economy reliance?
And a few great things we’ve read recently:
This New York Times piece does a great job of highlighting the downstream impacts of aggressive immigration policies – undocumented people are skipping necessary medical care due to fears of ICE.
The Guardian lays out what’s at risk for Eastern Appalachia as flooding becomes more severe and more frequent, and how people are trying to mitigate the risks.
The Atlanta Journal Constitution covers two lawsuits against the infamous Georgia Board of Pardons and Paroles, filed by people given very long sentences when they were teenagers. The board (which is a notorious black box) is ignoring Supreme Court precedent that requires people sentenced as kids to be treated differently than those sentenced as adults, continually denying them – with no real explanation as to why – any meaningful consideration for parole.
The Marshall Project has a great state of the state on how new AI tech is surveilling us all, how it’s coming up in courts, etc.
One last thing – if the intersection of AI and the criminal legal system continues to intrigue you and you’re going to be at the IRE conference in New Orleans next week, here’s a shameless plug for a panel Hannah will be moderating: Policing by Algorithm will be on Saturday (the 20th) at 4:15, and it’ll feature Douglas MacMillan (who co-wrote the WaPo investigation on the surveillance network in New Orleans), Jamiles Lartey (whose excellent Marshall Project piece is linked above) and EFF’s Beryl Lipton (clearly we’re big fans of this organization).
Until next time! And don’t forget to send us tips, thoughts, ideas, and feedback!