What AI-Powered Predictive Policing Needs: Accountability

The 2002 science fiction thriller “Minority Report” depicted a dystopian future in which a specialized police unit was tasked with arresting people for crimes they had not yet committed. Directed by Steven Spielberg and based on a short story by Philip K. Dick, the drama revolved around “Precrime” — a system informed by a trio of clairvoyants, or “precogs,” who foresaw future murders and allowed police officers to intervene before would-be attackers could reach their intended victims.

The film probes thorny ethical questions: How can someone be guilty of a crime they have not yet committed? And what happens when the system gets it wrong?

While there is no such thing as an all-seeing precog, key components of the future that “Minority Report” imagined have arrived even faster than its creators envisioned. For more than a decade, police departments around the world have been using data-driven systems designed to predict when and where crimes might occur.


Predictive policing is far from an abstract or futuristic fantasy, and market analysts predict a boom for the technology.

Given the challenges of using predictive machine learning effectively and fairly, predictive policing raises significant ethical concerns. With no technological fixes on the horizon, there is one way to address them: treat government use of the technology as a matter of democratic accountability.

Predictive policing relies on artificial intelligence and data analytics to anticipate potential criminal activity before it occurs. It can involve analyzing large data sets drawn from crime reports, arrest records and social or geographic information to identify patterns and forecast where crimes might occur or who may be involved.

Law enforcement agencies have used data analytics to track broad trends for decades. Today's powerful AI technologies, however, ingest vast amounts of surveillance and crime-report data to support much finer-grained analysis.

Police departments use these techniques to decide where to concentrate their resources. Place-based prediction focuses on identifying high-risk locations, also known as “hot spots,” where crimes appear statistically more likely to occur. Person-based prediction, by contrast, attempts to flag individuals considered to be at high risk of committing crimes or becoming victims of crime.
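To make the place-based approach concrete, here is a minimal, hypothetical sketch of the general idea: historical incidents are binned into a coarse geographic grid, and each cell is scored by a recency-weighted count, with the highest-scoring cells treated as “hot spots.” The grid size, decay rate, coordinates and incident data are illustrative assumptions, not the method of any actual vendor or department.

```python
# Minimal, illustrative sketch of place-based "hot spot" scoring.
# This is NOT any vendor's actual algorithm; the grid size, decay rate
# and incident data below are hypothetical choices for illustration.
from collections import defaultdict
from datetime import date
from math import exp

CELL_SIZE = 0.005     # grid cell size in degrees of latitude/longitude (assumption)
HALF_LIFE_DAYS = 30   # how quickly older incidents lose weight (assumption)

def cell_of(lat, lon):
    """Map a coordinate to a coarse grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hot_spots(incidents, today, top_n=3):
    """Rank grid cells by a recency-weighted count of past incidents.

    incidents: iterable of (lat, lon, incident_date) tuples.
    Recent incidents count more than old ones via exponential decay.
    """
    scores = defaultdict(float)
    for lat, lon, when in incidents:
        age_days = (today - when).days
        weight = exp(-age_days * 0.693 / HALF_LIFE_DAYS)  # 0.693 ≈ ln 2
        scores[cell_of(lat, lon)] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Hypothetical example data: three incidents near one block, one far away.
history = [
    (37.3382, -121.8863, date(2025, 5, 20)),
    (37.3384, -121.8860, date(2025, 6, 1)),
    (37.3381, -121.8866, date(2025, 6, 10)),
    (37.3000, -121.9000, date(2025, 1, 5)),
]
print(hot_spots(history, today=date(2025, 6, 15)))
```

Even this toy version makes clear that the output depends entirely on which incidents are recorded and how they are weighted, choices that remain invisible unless they are disclosed.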

These types of systems have drawn considerable public concern. Under a so-called “intelligence-led policing” program in Pasco County, Florida, the sheriff's department compiled a list of people it considered likely to commit crimes and then repeatedly sent deputies to their homes. More than 1,000 Pasco residents, including minors, were subjected to visits from deputies and cited for things such as missing mailbox numbers and overgrown grass.

Four residents sued the county in 2021, and last year they reached a settlement in which the sheriff's office admitted that it had violated residents' constitutional rights to privacy and equal treatment under the law. The program has since been discontinued.

This is not just a Florida problem. In 2020, Chicago decommissioned its “Strategic Subject List,” in which police used analytics to predict which prior offenders were likely to commit new crimes or become victims of future shootings. In 2021, the Los Angeles Police Department discontinued its use of PredPol, a software program designed to predict crime hot spots that was criticized for low accuracy and for reinforcing racial and socioeconomic biases.

Necessary Innovation or Dangerous Overreach?

The failure of these high-profile programs highlights a critical tension: Even though law enforcement agencies often advocate for AI-driven tools in the name of public safety, civil rights groups and scholars have raised concerns about privacy violations, accountability and a lack of transparency. And despite these high-profile retreats from predictive policing, many smaller police departments are still using the technology.

Most American police departments lack clear policies on algorithmic decision-making and provide little to no disclosure about how the predictive models they use are developed, trained or monitored. An analysis by the Brookings Institution found that in many cities, local governments had no public documentation on how predictive policing software worked, what data it used or how its results were evaluated.

This opacity is what is known in the industry as a “black box.” It prevents independent oversight and raises serious questions about the structures governing AI-driven decision-making. If a citizen is flagged as high-risk by an algorithm, what recourse do they have? Who oversees the fairness of these systems? What independent oversight mechanisms are in place?

These questions are driving contentious debates in communities over whether predictive policing should be reformed, more tightly regulated or abandoned altogether. Some people view these tools as necessary innovation, while others see them as dangerous overreach.

A Better Way in San Jose

There are indications, however, that data-driven tools grounded in the democratic values of due process, transparency and accountability may offer a stronger alternative to today's predictive policing systems. What if the public could understand how these algorithms work, what data they rely on, and what safeguards exist to prevent discriminatory outcomes and misuse of the technology?

The city of San Jose, California, has launched a process intended to increase transparency and accountability around its use of AI systems. San Jose maintains a set of AI principles requiring that any AI tools used by city government be effective, transparent to the public and equitable in their effects on people's lives. City departments must also assess the risks of AI systems before integrating them into their operations.

Implemented correctly, these measures can effectively open the black box and limit the extent to which AI companies can hide their code or data behind things such as trade secret protections. Enabling public scrutiny of training data can reveal problems such as racial or economic bias, which can be mitigated but are extremely difficult, if not impossible, to eliminate.

Studies have shown that when citizens believe government institutions act fairly and transparently, they are more likely to participate in civic life and support public policies. Law enforcement agencies are likely to see stronger outcomes if they treat technology as a tool for justice, not a replacement for it.

Maria Lungu is a postdoctoral researcher in law and public administration at the University of Virginia. This commentary is republished from The Conversation under a Creative Commons license. Read the original article.


Governing's opinion columns reflect the views of their authors and not necessarily those of Governing's editors or management.
