AI is automating injustice in American policing

Artificial intelligence (AI) is increasingly being used in American policing, and its adoption by law enforcement has raised serious concerns about reliability and potential misuse. Critics argue that AI facial-recognition tools are often inaccurate, generating false leads and wrongful arrests that fall disproportionately on people of color.

Reliance on AI also perpetuates existing biases in policing, further entrenching the systemic over-policing and containment of working-class Black and brown communities. Humans may defer to AI's seemingly objective, calculating nature, ignoring the reality that these systems learn from the past to predict the future, which can amplify unethical practices already prevalent in policing.
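To make that amplification concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from any real predictive-policing product; the districts, counts, and rates are invented. It only illustrates the dynamic critics describe: a model that treats past records as future risk sends patrols where records are highest, and police presence itself generates the next round of records.

```python
# Hypothetical feedback-loop sketch (illustrative numbers only, not any
# real vendor's system). Both districts have identical underlying rates;
# district A merely starts with more records from past patrols.
recorded = {"A": 120.0, "B": 100.0}  # historical incident counts (assumed)
INCIDENTS_PER_PATROL = 5.0           # observed incidents per patrol unit, same everywhere
PATROL_UNITS = 10

for year in range(1, 6):
    # The "prediction": treat past records as future risk, patrol the top district.
    hot = max(recorded, key=recorded.get)
    # Police presence generates new records; the underlying rates never differed.
    recorded[hot] += PATROL_UNITS * INCIDENTS_PER_PATROL
    share_a = recorded["A"] / sum(recorded.values())
    print(f"year {year}: patrols sent to {hot}; district A now holds {share_a:.0%} of records")
```

After five rounds, district A holds nearly four fifths of all records even though the underlying behavior in the two districts never differed; the initial disparity, not reality, drives the allocation.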

"Large language model vendors are never going to talk about what can go wrong," says Graham Lovelace, a journalist and writer on AI technology. "They never discuss hallucinations or give health warnings to the public. It's not in their self-interest." However, AI tools can be highly unreliable and cause harm, with many predictions generated by these tools being inaccurate.

ShotSpotter technology, used by police departments across the US, has been criticized for its high false positive rate, with some studies showing that up to 90% of alerts do not match real gun violence. Despite this, many police departments continue to use AI-powered surveillance tools, often citing their effectiveness in reducing crime and saving lives.
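The scale of that false-alert figure is easy to reproduce with back-of-envelope base-rate arithmetic. The numbers below are assumptions for illustration, not measured properties of ShotSpotter: even a detector that catches 95% of real gunshots and misfires on only 9% of other loud noises produces mostly false alerts, because real gunfire is rare among the sounds the sensors hear.

```python
# Back-of-envelope base-rate arithmetic. All figures are assumed for
# illustration; none are measured properties of ShotSpotter.
loud_events = 1000          # loud sounds per night reaching the sensors (assumed)
real_gunshots = 10          # true gunfire events among them (assumed)
sensitivity = 0.95          # share of real gunshots correctly flagged (assumed)
false_alarm_rate = 0.09     # share of other noises wrongly flagged (assumed)

true_alerts = real_gunshots * sensitivity                        # 9.5
false_alerts = (loud_events - real_gunshots) * false_alarm_rate  # 89.1
precision = true_alerts / (true_alerts + false_alerts)           # ~0.10

print(f"{true_alerts + false_alerts:.0f} alerts, {precision:.0%} tied to real gunfire")
# ~99 alerts a night, only ~10% real: a ~90% false-alert share emerges
# from the low base rate alone, matching the magnitude critics cite.
```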

However, critics argue that these tools are used to justify the deployment of more officers in already militarized areas, particularly in low-income communities of color. This perpetuates existing power structures, with AI serving to automate entrenched hierarchies rather than question or correct them.

The use of AI surveillance regimes also raises concerns about privacy and due process rights. Many police departments are opaque about their contracts with private companies, making it difficult for the public to understand how these tools work and what data they collect. The creation of "black boxes" around AI systems makes it challenging for citizens to hold corporations accountable for their actions.

"The companies can essentially use them [AI] to train their data for other sales that they want to make down the line," says Andrew Guthrie Ferguson, a law professor at George Washington University. This raises concerns about the exploitation of public trust and the prioritization of corporate interests over public safety.

Ultimately, critics argue that AI surveillance tools are not a solution to complex social problems like policing and crime. Instead, they cannibalize resources that could fund more effective responses: healthcare, affordable housing, education, and community development. As one critic notes, "the very idea that complex social problems can be solved by advanced technology is a false, expensive promise."
 
I'm totally freaking out about this AI thing in policing 🤯! I mean, I get it, we need to stay safe, but come on, using AI tools that are so unreliable? It's like they're gonna start arresting people for just walking down the street 🚫. And what really gets me is how these companies just want to use AI to make a buck, not even caring about the harm it could cause. Like, hello, we need better solutions than just slapping some tech on top of our problems 💻. We should be investing in things like healthcare and education, not throwing more money at surveillance tools that are just gonna perpetuate existing biases 🤔. And can we even trust these companies with our data? They're basically hiding what they do behind some kind of "black box" 🚫. It's like, no way, dude! We need to take a step back and think about how we're using tech in the first place 🙅‍♀️.
 
AI is being used in policing but it's like putting a band-aid on a bullet wound 🚑💉. If the AI is not accurate then we're still dealing with cops making mistakes, but now they've got fancy tech to blame 😏. We need to look at why these communities face poverty and high crime rates in the first place. Is throwing more cops at the problem really gonna solve anything? 🤔
 
Using AI in policing is like voting in your local elections 🤔 - you gotta think about the candidate (the tool) and its track record (the results). With AI facial-recognition tools being only 70% accurate, that's like having a mayor with a 30% approval rating. You can't trust 'em to make the right call. And let's be real, these companies are just looking for ways to upsell their services - it's all about the benjamins 💸. We need to think about how we're funding these AI programs and whether that money is going towards community development or just lining the pockets of corporate America 🤝. Can't have our law enforcement agencies using technology to further entrench existing power structures 👮‍♂️.
 
I'm really concerned about the use of AI tools in policing 🤖. It's just not worth the risk of perpetuating biases and causing more harm to already marginalized communities. I've been doing some research on this topic and it seems like many of these tools are highly unreliable, like ShotSpotter tech with its high false positive rate 📊. And yet we're still relying on them to reduce crime? It just doesn't add up.

I also think it's really shady that companies aren't being transparent about their contracts and data collection practices 🤐. How can we trust these corporations when they're not even willing to be open with us? And what about the exploitation of public trust for corporate interests? That's just unacceptable 💸.

We need to take a step back and think about whether AI surveillance tools are really going to help us solve social problems or if they're just a Band-Aid solution that's only serving to further entrench existing power structures 🤔. I'm not convinced that the answer is more tech; we need to invest in real community development and healthcare initiatives instead 💪.
 
I'm seeing this AI facial-recognition thing being used in policing and it's just not sitting well with me 🤔. I mean, we already have enough issues with cops targeting people of color and now they're relying on machines to make decisions that affect lives? It's like, we're putting faith in a calculator to make life-or-death choices... that don't always add up 📊. And what really gets me is that these companies are making bank off this stuff while our communities are still struggling with real issues like poverty and lack of access to healthcare 💸. We need to think about the human impact here, not just the tech side of things 👥.
 
AI tools in policing are like trying to solve a puzzle with missing pieces 🤔. You gotta wonder how these companies can just sell their products without saying what's gonna go wrong? It's all about profit, fam 💸. These AI facial-recognition tools are so unreliable they're like a bad habit that you can't break 🚫. They're perpetuating existing biases and locking up people of color at an alarming rate 😩. The problem is, we gotta trust the system to do what's right... but AI's just learning from past mistakes and making new ones 🔄. Can't we focus on solving actual problems like poverty and inequality instead of relying on tech that's more likely to let us down? 🤦‍♂️
 
🚨 I'm low-key worried about this whole AI policing thing 🤖... the accuracy of these tools is sketchy at best, especially when it comes to POC communities 🙅‍♂️. Like, we all want to stay safe, but we gotta make sure our safety isn't just an excuse for systemic racism 🚫. And can we talk about how private companies are profiting off this surveillance regime 💸? It's wild how corporations can use AI tools to train their data for future sales and profit margins... it's like, what about public trust? 🤔

AI is supposed to help us solve complex problems, but honestly, I think we're just shifting resources away from more effective solutions like healthcare and education 🏥📚. We need to have a real conversation about how we can use technology for good, not just perpetuate the status quo 🔄... what do you guys think? Should we be more cautious when it comes to AI in policing? 🤔💭
 
I'm low-key worried about AI being used in policing 🤔. I mean, think about it - these tools are only as good as the data they're trained on, and if that data's biased, the predictions will be too 😬. And we all know how that can lead to people of color getting targeted more often... it's just not right 🚫.

And don't even get me started on how opaque these police departments are about their contracts with private companies 🤑. It's like they're trying to hide something, and I'm all about transparency 💡.

I think AI surveillance tools are just a way for corporations to make a quick buck off the public trust 💸. And what really gets my goat is that humans are so gullible - we'll defer to these calculating machines without questioning how their knowledge works 🤖. It's like, come on, folks! We need to think critically about this stuff, not just blindly follow the technology 🙄.

All in all, I'm just not convinced that AI is the solution to our complex social problems... it's a false promise at best 💔. Let's focus on investing in real solutions, like education and community development, rather than throwing money at tech that might just perpetuate more harm 😬.
 
Wow 🤯 the way AI is being used in policing is so concerning! Interesting how these tools are often inaccurate, producing false leads and wrongful arrests... it's like they're just perpetuating existing biases and power structures. 🚫👮‍♂️
 
AI facial-recognition tools in policing are like playing a game of Russian roulette 🔄 - you never know when they'll blow up in your face. The accuracy rate is all over the place and people of color are getting hit more often because of pre-existing biases in the system. We need to take a step back and think about what we're doing here.

Police departments are relying too heavily on these AI tools without fully understanding how they work or the potential consequences. It's like they're giving the reins to an automaton without questioning its goals 🤖. The problem is, AI learns from history, so if it's fed bad data, it'll only get worse. We can't just throw more money at this problem and expect everything to be okay 💸.

The real issue here is power dynamics. AI is being used to justify the deployment of more officers in already marginalized communities, which only perpetuates existing hierarchies 🔥. We need to ask ourselves if these tools are really helping us or if they're just serving as a convenient excuse for systemic oppression.

And let's not forget about privacy and due process rights 🤫. When police departments are opaque about their contracts with private companies, it's like they're hiding something 🚫. We deserve transparency and accountability when it comes to AI surveillance regimes. The fact that the data behind these "black boxes" can be reused for other sales is a major red flag 🔴.

Ultimately, we need to stop relying on AI as a quick fix for complex social problems. Let's invest in community development, healthcare, education, and affordable housing instead 💡. The idea that advanced technology can solve everything is just a myth 🤥. We need to take a more nuanced approach and acknowledge the role of power and privilege in shaping our society.
 
The whole AI thing in policing is messed up 🤯. I mean, we're talkin' about facial recognition tools that are basically as reliable as a friend who's always tellin' lies 😒. They just lead to more problems than solutions, ya know? Like, people of color gettin' wrongfully arrested and stuff... it's just not right 🤦‍♂️.

And don't even get me started on these companies like ShotSpotter that are just makin' a profit off this mess 💸. I mean, 90% of those alerts they send out aren't even real? That's just crazy! 🙄

It's all about keepin' the status quo and perpetuatin' power structures, you know? These AI tools just serve to automate existing hierarchies instead of changin' 'em up ⚖️. And what really gets me is that we're willin' to give these companies this much power without even understandin' how it all works 🤔.

We need to be more critical about this stuff and not just trust the tech giants 💻. We gotta think about the impact on our communities, too 👥. I mean, would you rather have a few more officers patrolin' your neighborhood or some AI system watchin' your every move? Neither one's exactly reassuring 🤷‍♂️.

The bottom line is that these AI surveillance tools ain't gonna solve the real issues we're facin', like poverty and inequality 💸. We need to focus on buildin' up our communities, not just patchin' over policing problems with some fancy tech 🔧. That's my two cents 👊
 
🤔 u know i'm so done with these AI tools being used in policing 🚨 it's like ppl think they're magic or something but really they're just amplifying existing biases and perpetuating systemic issues. the fact that ppl are deferring to AI's "calculating nature" ignores the reality that it's just learning from past mistakes and predicting the future based on those errors 📉 i mean what's next? relying on AI to solve complex social problems like healthcare and education? 🤖 give me a break! and yeah, these companies aren't exactly transparent about how they're using our data, making us sign away our privacy rights without us even realizing it 🙅‍♂️
 