Artificial intelligence (AI) is increasingly being used in American policing, but the adoption of AI tools by law enforcement has raised serious concerns about their reliability and potential for misuse. Critics argue that AI facial-recognition tools are often inaccurate, generating false leads and wrongful arrests that disproportionately target people of color.
Reliance on AI also perpetuates existing biases in policing, further entrenching the systemic over-policing and containment of working-class Black and brown communities. Humans may defer to AI's seemingly objective, calculating nature, ignoring the reality that these systems learn from the past to predict the future, which can amplify unethical practices already prevalent in policing.
"Large language model vendors are never going to talk about what can go wrong," says Graham Lovelace, a journalist and writer on AI technology. "They never discuss hallucinations or give health warnings to the public. It's not in their self-interest." However, AI tools can be highly unreliable and cause harm, with many predictions generated by these tools being inaccurate.
ShotSpotter technology, used by police departments across the US, has been criticized for its high false positive rate, with some studies finding that up to 90% of alerts do not correspond to verified gun violence. Despite this, many police departments continue to use AI-powered surveillance tools, often citing their effectiveness in reducing crime and saving lives.
However, critics argue that these tools are used to justify deploying more officers in already militarized areas, particularly in low-income communities of color. This perpetuates existing power structures, with AI automating entrenched hierarchies rather than questioning or correcting them.
The use of AI surveillance regimes also raises concerns about privacy and due-process rights. Many police departments are opaque about their contracts with private companies, making it difficult for the public to understand how these tools work and what data they collect. The "black boxes" built around AI systems make it hard for citizens to hold the corporations behind them accountable.
"The companies can essentially use them [AI] to train their data for other sales that they want to make down the line," says Andrew Guthrie Ferguson, a law professor at George Washington University. This raises concerns about the exploitation of public trust and the prioritization of corporate interests over public safety.
Ultimately, critics argue that AI surveillance tools are not a solution to complex social problems like policing and crime. Instead, they cannibalize resources that could fund more effective responses, such as healthcare, affordable housing, education, and community development. As one critic notes, "the very idea that complex social problems can be solved by advanced technology is a false, expensive promise."