To dispatch or not to dispatch
Surveillance technology makes mistakes – do we want it to be perfect?
Happy Friday, Control, Spy Delete readers! I’m Anna Baydakova and I’m here with a new batch of digital surveillance news for you. Hope you’re getting in a festive mood and haven’t succumbed to a cold like I did last week – gotta stay warm in these dark times, don’t we?
This week got me thinking about the mistakes technology makes and the level of power we’re placing in its hands – a theme that feels especially acute as we roll towards a world monitored and guided by AI.
What happened? A school's AI-powered surveillance system once again called the cops on an innocent student, sending teens into a panic and producing yet another grim anecdote.
Is it possible to create a perfect surveillance system – one that makes no mistakes and efficiently prevents violence? We may edge closer as technology develops exponentially. But what would a system like that leave of our privacy? Now that’s a trickier question.
In the meantime, we’re keeping tabs on the twists and turns of the AI surveillance revolution here in the U.S. – share this newsletter with privacy-minded friends if you’re enjoying it!
Let’s get into it.
Subscribe to stay aware!
Biometrics briefing
Online identification provider iProov demonstrated that it’s possible to falsify a facial recognition verification on an Android device by swapping a live camera feed with a synthetic video. – ID Tech Wire
Australian casinos use facial recognition technology to monitor clients. – The Saturday Paper
Indonesia is introducing facial recognition requirements for new SIM card registrations starting July 1, 2026. – Tech In Asia
AI sees fake guns again
Remember when a school AI surveillance system mistook a bag of Doritos for a gun and dispatched cops to handcuff the student? Back in October, students of Kenwood High School in Baltimore saw what happens when an AI-powered surveillance system produces a false alarm.
This week, a similar story happened at Lawton Chiles Middle School in Florida. Police arrived looking for “a man in the building, dressed in camouflage with a ‘suspected weapon pointed down the hallway, being held in the position of a shouldered rifle’,” The Washington Post reported.
However, after officers double-checked the images that triggered the alarm, they realized “the suspected rifle might have been a band instrument.” The student suspected of carrying a gun was actually holding a clarinet and was dressed as a military character from the Christmas movie Red One – it was a Christmas-themed dress-up day at the school. The “culprit” was hiding in the band room with other frightened students when police arrived.
A co-founder of ZeroEyes, the firm that provided the AI surveillance tool for the school, told reporters that the student was holding his musical instrument like a rifle, so it was “better to dispatch [police] than not dispatch.”
ZeroEyes works with schools in 48 states, Ars Technica writes. The firm has been equipping U.S. schools with AI surveillance systems since 2023, and has already sent its clients into panic over false alarms: Brazoswood High School went into lockdown in October 2023 when ZeroEyes hallucinated an assault rifle in a student’s arms. To be fair, such mistakes are a small fraction of the more than 1,000 guns ZeroEyes says it has detected in schools since 2023, and at least some of those detections may have prevented school shootings.
Some schools use other gun-detecting tools like Omnilert (the system in use at Kenwood High School, where the student was handcuffed over a Doritos bag) and VOLT AI. Schools also use AI tools like Lightspeed to monitor students’ browsing activity on school-owned computers, Education Week reported earlier this year. There are also audio devices schools deploy in bathrooms, trained to detect sounds of distress or cries for help.
However, advanced surveillance still can’t prevent every tragedy: Antioch High School in Nashville had Omnilert installed, but still lost a student to a school shooting on January 20. Omnilert CEO Dave Fraser told CNN that “the gun could not be detected because it was not visible.”
This combination of fear-inducing false alarms and failure in the face of genuine threats is a problem for any AI surveillance system, not just those in schools. When data is interpreted in real time and the stakes are high, mistakes are inevitable, especially given the general imperfections of AI systems.
Another problem is cybersecurity: if a surveillance system of this sort gets hacked, a wealth of sensitive data about school kids can end up in the wrong hands – with unpredictable consequences. Earlier this year, a Portland high school student hacked a Halo listening device in his school and turned it into a real-time eavesdropping bug, Wired reported. Motorola, the producer of Halo, later said it issued security updates to patch the vulnerabilities.
The balance between safety and privacy is especially delicate when it comes to children. Turning schools into places of constant, omnipresent monitoring is an issue that deserves thorough discussion with parents, teachers, and other involved adults. This is rarely the case; often, students and parents do not realize the full scope of surveillance occurring in their schools, Forbes writes.
ICE robot on duty
Have I told you the future of surveillance is AI-powered systems watching our every step? (I have.) Well, the U.S. has taken yet another step towards automated surveillance, starting with immigrants.
Immigration and Customs Enforcement (ICE) has contracted a firm that builds AI surveillance agents, 404 Media discovered. The company, named AI Solutions 87, says its agents “deliver rapid acceleration in finding persons of interest and mapping their entire network.” The firm received $636,500 from ICE for helping the agency locate undocumented immigrants, according to public procurement records.
ICE has been outsourcing this kind of work to private companies since the fall: in October, the agency started looking for skip tracers – private investigators who normally help locate people who skip bail. Public procurement documents showed a $180 million budget for such contracts.
Since October, ICE has spent at least $11.6 million on skip-tracing services, 404 Media found; contractors include B.I. Incorporated, SOS International LLC, and Global Recovery Group LLC.
At the same time, ICE is trying to keep track of its own employees. The agency is planning to upgrade its internal digital surveillance system to prevent leaks, which includes “surveillance of ICE networks, automated alerts for suspicious behavior, and routine analysis of logs pulled from servers, workstations, and mobile devices,” Wired writes.
Drones on the prowl
In the meantime, border surveillance is also about to get an upgrade: the Customs and Border Protection (CBP) agency is looking for better surveillance drones to patrol the border.
According to Wired, CBP is seeking lightweight drones that can withstand high heat, dust, and wind, detect movement from afar, and relay coordinates to agents. The agency already has 500 drones patrolling the Mexican border, according to the Arizona Center for Investigative Reporting.
The upgraded drone fleet is expected to add to CBP’s arsenal, alongside the mobile surveillance vans – equipped with radars, high-powered cameras, wireless networking, and artificial intelligence to monitor the border – that the Department of Homeland Security was looking to purchase earlier this year.
ICE has been actively playing with drones, too. Since 2021, the agency has been buying AI-powered drones from California-based contractor Skydio, Forbes reported. The drone, X10D, can automatically track and pursue a target and “detect objects, vehicles, and humans” on its own.
Eyes on Flock
Flock, the maker of AI-powered license plate readers widely deployed across U.S. cities, has been under scrutiny lately. The firm got into hot water earlier this year when journalists discovered it shared data from its camera network with both local and federal authorities, including ICE. The cameras have also been used to monitor protest rallies, according to the Electronic Frontier Foundation.
Since the fall, people in various cities and states have been taking steps to stop the deployment of Flock cameras in their hometowns, considering the network an intrusive and unnecessary surveillance system.
Human rights advocates also joined the movement: EFF and the American Civil Liberties Union (ACLU) lawyers have sued the city of San Jose over its Flock deployment, on behalf of two California nonprofits, following a similar lawsuit by two citizens in Norfolk, Virginia.
Now, local politicians are joining the fray. In San Diego, city council members requested a review of the police department’s contract with Flock after it emerged that the company had hired a recently retired police captain who was involved in negotiating the contract. The council members noted that the contract was signed in 2023 without a competitive bidding process, Axios reported.
And in Texas, the state’s Department of Public Safety is investigating Flock over its handling of private security licenses – the agency claims the company did not maintain proof of the required liability insurance, the Houston Chronicle writes.
Watch the watchers: new spyware alert
Reporters Without Borders (RSF) has uncovered a previously unknown spyware that has allegedly been used against journalists in Belarus since 2021. The spy app, dubbed ResidentBat by researchers, was discovered on the phone of a journalist who had recently been summoned for questioning by the country’s security service, the KGB (yes, it still uses the name inherited from the USSR). The journalist was asked to unlock his phone while KGB officers watched, and then to leave the device in a lockbox.
The app does not contain any exploits that would allow it to gain root access to the victim’s phone, but it still collects a wide range of data, including SMS messages, incoming and outgoing call records, camera output, files, Android browser bookmarks, clipboard contents, and audio from the device’s internal microphone. It also has screen-monitoring and device-administration capabilities.
Because the malware’s code contained the strings “bat” and “resident,” researchers named it ResidentBat. It remains unclear how many phones have been infected, and RSF has not disclosed the journalist’s identity for safety reasons.
***
And that’s all from us this week, folks.
Stay vigilant!
Anna

