Around a month ago, a young Black male high school student in Baltimore County, Maryland, was arrested by police for allegedly carrying a concealed gun. Except that the so-called gun was actually a bag of Doritos, which his school’s AI weapon detection system mistook for a firearm. Fortunately, things were resolved and the innocent boy was released, but not before he was traumatized, of course, by a group of armed police swarming him, guns pointed. Now, a month later, state officials claim that the AI weapon detection system responsible for the mistake is not biased against skin color, aka racist.
According to WMAR 2, the Maryland Office of the Inspector General for Education states that no evidence was found of the AI system being “racist.” The victim’s family would beg to differ, of course, seeing how jumping from a bag of chips to a weapon is a pretty wild leap, even before factoring in the student’s skin color. In a statement, Baltimore County Public Schools says the board plans to stick to its “multi-tiered approach for managing school safety,” though the district does promise that it will “adopt the report’s recommendations.”
With governments and companies pushing AI to the absolute max these days, many people are torn about the technology being used for everything, especially weapon detection in schools. So, it’s not hard to see why plenty of online users would oppose the Maryland state officials’ conclusion that the school’s AI is not racially biased. However, there is one very important detail that people may have overlooked about this incident.
After the AI flagged the student’s chip bag as a gun, “the school district’s security department reviewed and canceled the gun detection alert.” However, the principal still called the police on the child, claiming she didn’t know the alert had been canceled.
“So the AI and security team weren’t racist but the principal definitely was,” remarks one Redditor. “Sending a bunch of cops to publicly arrest a kid based solely on the findings of AI is high stupidity. People are becoming way too dependent on these programs,” adds another. Of course, there’s no proof that the principal was actually racially biased, but it’s easy to see why people would suspect it.