Police Use of Artificial Intelligence: 2021 in Review

Decades ago, imagining the practical uses of artificial intelligence, science fiction writers envisioned autonomous digital minds that could serve humanity. Of course, sometimes a HAL 9000 or a WOPR would subvert expectations and go rogue, but that was quite unintentional, right?

And in many aspects of life, artificial intelligence is delivering on that promise. AI is, as we speak, looking for evidence of life on Mars. Scientists are using AI to try to develop faster and more accurate ways of forecasting the weather.

But when it comes to policing, the reality is much less optimistic. Our HAL 9000 does not assert its own decisions about the world; instead, programs that purport to use AI for policing reaffirm, justify, and legitimize the opinions and actions already being taken by police departments.

AI in policing poses two problems: tech-washing, and a classic feedback loop. Tech-washing is the process by which proponents of the results can defend those results as unbiased because they were derived from “math.” And the feedback loop is how that math continues to perpetuate historically entrenched, harmful outcomes. “The problem with using algorithms based on machine learning is that if these automated systems are fed with examples of biased justice, they will end up perpetuating these same biases,” as one philosopher of science has noted.

All too often, artificial intelligence in policing is fed by data collected by police, and therefore it can only predict crime based on data from the neighborhoods police are already policing. But crime data is notoriously inaccurate, so policing AI not only misses the crime that happens in other neighborhoods, it reinforces the idea that the neighborhoods that are already over-policed are exactly the neighborhoods where police are right to direct patrols and surveillance.
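To make the feedback loop concrete, here is a minimal, purely illustrative simulation. The neighborhood labels, the crime rate, and the "send patrols where past arrests were highest" rule are all assumptions invented for this sketch; it is not how PredPol or any other vendor's software works. The point it illustrates: even when the true rate of crime is identical everywhere, recording crime only where patrols go makes the historical data, and therefore the "predictions," point back at the same neighborhood.

```python
# Illustrative sketch of the feedback loop described above.
# All numbers, neighborhood names, and the allocation rule are
# hypothetical; this is not any real vendor's algorithm.
import random

random.seed(0)

# Assume every neighborhood has the same true rate of crime.
TRUE_CRIME_RATE = 0.3
neighborhoods = ["A", "B", "C", "D"]

# Historical arrest counts: neighborhood A starts out over-policed.
arrest_history = {"A": 12, "B": 2, "C": 2, "D": 2}

for day in range(30):
    # "Prediction": send the patrol where past arrests were highest.
    patrolled = max(arrest_history, key=arrest_history.get)

    # Crime occurs everywhere at the same rate, but it is only
    # recorded as an arrest where the patrol happens to be.
    for hood in neighborhoods:
        crime_occurred = random.random() < TRUE_CRIME_RATE
        if crime_occurred and hood == patrolled:
            arrest_history[hood] += 1

print(arrest_history)
# Neighborhood A accumulates nearly all recorded arrests, so the
# "data" keeps directing patrols back to A, even though the true
# crime rate was identical everywhere.
```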

The way AI tech-washes the unjust data created by an unjust criminal justice system is becoming increasingly apparent.

In 2021, we got a better idea of what “data-driven policing” really means. An investigation by Gizmodo and The Markup showed that the software that put PredPol, now called Geolitica, on the map disproportionately predicted that crimes would be committed in neighborhoods inhabited by working-class people, people of color, and Black people in particular. You can read here about the technical and statistical analysis they performed to show how these algorithms perpetuate racial disparities in the criminal justice system.

Gizmodo reports that “for the 11 departments that provided data on arrests, we found that the arrest rates in the predicted areas remained the same whether PredPol predicted a crime that day or not.” In other words, no strong correlation was found between arrests and predictions. This is precisely why so-called predictive policing, or any other data-driven policing scheme, should not be used. Police patrol the neighborhoods inhabited primarily by people of color, which means these are the places where they make arrests and write citations. The algorithm takes those arrests into account and determines that these areas are likely to see crime in the future, thus justifying a heavy police presence in Black neighborhoods. And so the cycle continues.
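As a rough illustration of the kind of comparison Gizmodo describes, contrasting arrest rates on days a block was flagged with days it was not, here is a toy sketch. The records, field layout, and numbers are fabricated for illustration; this is not The Markup's actual data or methodology.

```python
# Toy comparison: are arrest rates higher on days a block was
# flagged by the prediction software? Data here is made up.
from collections import defaultdict

# Each record: (block_id, was_predicted_that_day, arrests_that_day)
records = [
    ("block-1", True, 1), ("block-1", False, 1),
    ("block-2", True, 0), ("block-2", False, 0),
    ("block-3", True, 2), ("block-3", False, 2),
]

totals = defaultdict(lambda: {"days": 0, "arrests": 0})
for _, predicted, arrests in records:
    key = "predicted" if predicted else "not predicted"
    totals[key]["days"] += 1
    totals[key]["arrests"] += arrests

for key, t in totals.items():
    rate = t["arrests"] / t["days"]
    print(f"{key}: {rate:.2f} arrests per block-day")

# If the two rates come out about the same, the predictions add
# nothing beyond the department's existing patrol patterns.
```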

This can happen with other technologies that rely on artificial intelligence as well, such as acoustic gunshot detection, which can send police false-positive alerts signaling the presence of gunfire.

This year, we also learned that at least one so-called artificial intelligence company that received millions of dollars and untold amounts of government data from the state of Utah could not deliver on its promises to help direct law enforcement and public services to problem areas.

This is precisely why a number of cities, including Santa Cruz and New Orleans, have banned government use of predictive policing programs. As the mayor of Santa Cruz said at the time, “If we have racial bias in policing, it means that the data that goes into these algorithms is already inherently biased and will have biased results, so it doesn’t make sense to try to use the technology when the likelihood that it will have a negative impact on communities of color is obvious.”


Next year, the fight against irresponsible police use of artificial intelligence and machine learning will continue. EFF will keep supporting local and state governments in their fight against so-called predictive or data-driven policing.

This article is part of our Year in Review series. Read more articles on the fight for digital rights in 2021.