Body cameras have generally been a win for police accountability, making officers' actions transparent, and the footage they capture can play a vital role in court. With the emergence of AI, though, how this technology gets used deserves real scrutiny, because today's AI systems are still riddled with flaws. That's why people are calling it “dangerous” that an Arizona company wants to push AI technology that police would use to scan your face.
A video outlining the technology was shared on r/TikTokCringe. According to the TikToker, police forces are currently testing body cameras with built-in facial recognition. When you walk past an officer, the camera would scan your face to check whether you have an active warrant. That could easily lead to an arrest, and, if the system gets it wrong, a wrongful one. Notably, the Arizona company behind the cameras, Axon, said itself in recent years that the technology was too dangerous. Its ethics board scrapped initial plans for AI face scanning, but now the feature may be headed to market once again.
If the body cam system hits you with a “safety flag,” you could be questioned or even detained by police. Though this may sound like a useful tool for a police force, it raises serious privacy concerns. There would be few protections for everyday individuals flagged or misidentified by an AI system like this, especially one deployed by police. And because Axon is the “biggest supplier of body cameras for police departments” in America, we have a lot to fear.
Faulty cameras or AI systems could lead to false accusations and arrests. That risk isn't hypothetical: in a recent incident in Denver, a woman was accused of stealing a package and received a court summons after police, relying on footage from a Flock surveillance camera, claimed it showed her taking it. When the evidence was later reviewed, the camera turned out to have produced false information. She hadn't stolen the package, and in the end her summons was voided. Events like this will only become more common as untrustworthy AI systems spread or land in the hands of sketchy cops.
One commenter pointed out how this could go horribly wrong. They said, “Gotten a parking ticket before? Safety flag! Not white? Safety flag!! Walking while looking at the cops wrong? Safety flags for everyone!” Another person commented, “Knowing how ‘reliable’ tech like this is… I would almost always assume a flag is a false positive. I’m sure that’s how it’ll be handled… [Expletive] like this is one of the reasons why I default to just staying the [expletive] inside anymore. I’m sure we’ll invent an issue with that pretty soon as well, though.”