Small battles
Every way of fighting mass surveillance hits differently
Happy Friday, Control, Spy, Delete readers! Your weekly batch of digital surveillance and censorship news is here.
Technology is empowering – but it doesn’t always empower you. In the hands of a government or a large corporation, it can, on the contrary, put individuals at the mercy of an algorithm. And the very existence of that algorithm, let alone the scope of its power, is often a mystery to ordinary citizens unfamiliar with the latest tools of control.
You might not know why a facial recognition system flagged you for arrest or ejection from a place where you had every right to be – but you might get arrested and kicked out anyway. As technology becomes increasingly sophisticated and powerful, its human operators tend to trust it more and lose the habit of questioning its output.
Fortunately, not everyone is willing to stay complacent – some people sue governments and corporations over unwarranted surveillance, others educate fellow citizens, and some come up with guerrilla countermeasures of their own. I personally expect a boom in apps and devices designed to detect the surveillance tech around us.
In the meantime, we’ll continue documenting the expansion of digital surveillance, its inevitable screw-ups, and the legal battles around it. Without further ado, let’s get into it!
If you’re enjoying this newsletter, help me spread the word and keep more people aware.
Biometrics briefing
To prevent the facial recognition of jury members, a California judge ordered Meta staff to remove their smart glasses during hearings on the mental health effects of Instagram and YouTube. – Biometric Update
Neutrogena’s parent company has agreed to pay $4.7 million to settle a class action lawsuit alleging it unlawfully collected and stored the facial geometry of Illinois residents through its Neutrogena Skin360 app. – Bloomberg Law
The Pentagon has been studying how eye movements may reveal deception. – Biometric Update
You got the wrong guy… again
Yet another case of facial recognition technology misuse has led to a wrongful arrest – this time in the UK. The country has been ramping up live facial recognition surveillance in major cities for months, deploying specialized vans and adding facial recognition features to street cameras.
But the technology, which police in many cases trust more than other evidence, once again proved far from ideal. Alvi Choudhury, a 26-year-old software engineer, was arrested as a suspect in a burglary he did not commit – one that occurred 100 miles away from his home in Southampton, The Guardian reports.
In January, Choudhury spent 10 hours in jail before being released without charges, all because a facial recognition system had mistaken him for the burglar. Even though the local police department claimed the results had been checked by a human, officers agreed with Choudhury on the spot that he did not really look like the suspect – the only actual similarity was his curly hair, according to The Guardian.
UK police use technology developed by the German firm Cognitec, which searches the national database of 19 million police mugshots. Choudhury’s mugshot was in the system because of another wrongful arrest: in 2021, he himself was attacked on a street, briefly arrested and released without charges.
Facial recognition technology is infamously biased against non-White people, and Cognitec in particular has been reported to have a much higher rate of false positives for Black and Asian faces than for White ones.
Last August, a similar debacle occurred in New York. A police facial recognition system identified Trevis Williams as an Amazon delivery driver who had flashed a woman in a Manhattan building. The fact that Williams was taller and larger than the suspect – and, more importantly, was driving from Connecticut to Brooklyn at the time of the incident – did not save him from spending two days in jail.
A wrongful arrest is an extreme case, but smaller abuses are still damaging. Warren Rajah was kicked out of a Sainsbury’s supermarket in London without explanation. It turned out that the Facewatch facial recognition system the store chain uses had mistaken him for another shopper. But before learning that, Rajah had to prove his innocence without ever being told what he was accused of: he was instructed to scan a QR code, go to Facewatch’s website, and send a picture of himself and his passport – to eventually be “acquitted.”
And that is precisely the problem with the current global trend of making national law enforcement agencies reliant on biometric surveillance: a machine makes the mistake, but it’s a human who ends up humiliated and hurt. And the more automated these systems become, the less agency a single citizen has to challenge them.
One big, beautiful biometric dragnet
Reliable or not, the U.S. government is betting big on biometric technology. The Department of Homeland Security is planning to merge its biometric databases into one big system that will allow searches of faces, fingerprints, iris scans, and possibly voice samples across law enforcement agencies, Wired reports.
According to documents reviewed by reporters, the DHS is seeking to buy a single “matching engine” for systems currently operated by Customs and Border Protection, Immigration and Customs Enforcement, the Transportation Security Administration, U.S. Citizenship and Immigration Services, the Secret Service, and DHS headquarters. The initiative comes amid the absence of a comprehensive national policy governing biometric data: the government has revoked the policy adopted during the Biden administration but hasn’t yet published a new one.
Beyond the privacy and cybersecurity risks of conflating these different systems, they also operate on different principles, Wired notes: biometric ID systems have a higher threshold for image quality and are designed to give a single definitive result, whereas systems developed for investigations can work with lower-quality photos and produce a range of potential matches.
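To make the distinction concrete, here is a minimal sketch of those two matching philosophies. All names and thresholds are illustrative assumptions, not details of any real DHS system.

```python
# Sketch of the two matching modes Wired describes. The thresholds and
# function names here are hypothetical, chosen only for illustration.

def match_identification(scores, threshold=0.95):
    """ID-style matching: demand high confidence and return a single
    definitive result, or nothing at all."""
    best_id, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_id if best_score >= threshold else None

def match_investigation(scores, threshold=0.60, top_k=5):
    """Investigative matching: accept lower-quality probes and return a
    ranked list of candidate leads for a human analyst to review."""
    candidates = [(i, s) for i, s in scores.items() if s >= threshold]
    return sorted(candidates, key=lambda kv: kv[1], reverse=True)[:top_k]

# Similarity scores between one probe image and four enrolled subjects.
scores = {"A": 0.72, "B": 0.64, "C": 0.41, "D": 0.88}
print(match_identification(scores))   # None – no score clears the ID bar
print(match_investigation(scores))    # [('D', 0.88), ('A', 0.72), ('B', 0.64)]
```

The same probe that an ID system would reject outright becomes three investigative "leads" – which is exactly why bolting the two modes onto one matching engine is riskier than it sounds.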
At the same time, the U.S. government has just made its own work less transparent for journalists and researchers, 404 Media noticed: the FPDS.gov website, which used to contain information about government contracts, has been shut down. It’s being replaced by SAM.gov, which is much less user-friendly, according to 404 Media’s Joseph Cox.
Observers fight back
In the meantime, two Maine activists have sued the DHS and Secretary Kristi Noem for allegedly using surveillance technology to intimidate ICE observers, thereby violating the First Amendment, Politico reports. Since January, activists in different states who have followed and filmed ICE raids have reported being confronted by agents and told they would be entered into a database of domestic terrorists.
According to journalist Ken Klippenstein, the DHS is actually gathering information about people filming ICE raids and conducting background checks on them, looking up their social media accounts, tracking license plates, and running criminal history checks.
The DHS is also bombarding popular digital platforms like Google, Reddit, Discord and Meta with hundreds of administrative subpoenas for personal data of users who criticize ICE. On Wednesday, three Democratic congressmen asked Apple, Amazon, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok and X to disclose how many subpoenas they had received and what type of information the DHS had requested, The New York Times reports.
Chicago’s anti-surveillance fighters
MIT Technology Review published an extensive case study on how Chicago civic groups pushed back against mass surveillance policies and practices in their communities. The city has the highest number of street cameras per capita in the U.S. – up to 45,000 – and one of the largest license plate reader systems.
MIT Tech Review interviewed local activists campaigning against street cameras, Flock license plate readers, gunshot detectors, and the way these technologies are being deployed mostly in Black and Latino parts of the city. Some of Chicago activists’ tactics have become a blueprint for privacy advocates across the country, so their first-hand accounts are absolutely worth a read.
The most interesting part for me was about ShotSpotter, the gunshot sound detection tool Chicago previously deployed in public spaces. In recent years, several people were arrested after being near these sensors, revealing a troubling pattern: the system generated alerts even when no shooting had occurred, but police still used them as a pretext to arrest people and then charge them with something completely different.
Following a public campaign, Chicago’s current mayor Brandon Johnson ended the city’s contract with SoundThinking, the company behind ShotSpotter, in 2023.
Tangentially related: people across the U.S. keep destroying Flock license plate cameras, journalist Brian Merchant writes in his newsletter.
Stalkers detected
Hobbyist developer Yves Jeanrenaud has created an app that notifies you when someone nearby is wearing smart glasses, 404 Media reports. The app detects the Bluetooth identifiers that smart glasses broadcast and can be downloaded from the Google Play Store or GitHub.
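The detection idea can be sketched in a few lines: smart glasses advertise themselves over Bluetooth LE, so a scanner can flag advertisements whose names or manufacturer IDs match a known glasses profile. The prefixes and IDs below are hypothetical placeholders, not the actual identifiers Jeanrenaud’s app uses.

```python
# Illustrative sketch of Bluetooth-based smart-glasses detection.
# The name prefixes and manufacturer IDs are invented for this example.

GLASSES_NAME_PREFIXES = ("Ray-Ban Meta", "Meta Glasses")  # hypothetical
GLASSES_MANUFACTURER_IDS = {0x01AB}                        # hypothetical

def looks_like_smart_glasses(adv_name, manufacturer_ids):
    """Return True if a BLE advertisement matches a known glasses profile.

    adv_name: advertised device name (or None if the ad carries no name).
    manufacturer_ids: company identifiers from the ad's manufacturer data.
    """
    if adv_name and adv_name.startswith(GLASSES_NAME_PREFIXES):
        return True
    return any(mid in GLASSES_MANUFACTURER_IDS for mid in manufacturer_ids)

# A real app would feed live advertisements into a filter like this,
# e.g. via the cross-platform `bleak` library's scanner callback.
print(looks_like_smart_glasses("Ray-Ban Meta 0423", []))  # True
print(looks_like_smart_glasses("JBL Flip 5", [0x004C]))   # False
```

The obvious limitation, which applies to the real app too, is that it only catches devices that broadcast recognizable identifiers; glasses with Bluetooth off or randomized advertising stay invisible.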
Jeanrenaud made the app in response to a growing trend of people wearing smart glasses and filming others without consent, since these devices offer a far more discreet way to record video than a camera or smartphone. Meta’s Ray-Ban smart glasses normally show a red light when video is being recorded, but there are tricks to switch it off.
To make matters worse, Meta is planning to add a facial recognition feature to its smart glasses, potentially turning every device into an AI-powered biometric surveillance tool. The company is not oblivious to the privacy implications of this update and the potential public backlash it could provoke, but hopes the current tumultuous political climate in the U.S. will keep the public too busy to notice, The New York Times reported.
“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” reads a document from Meta’s Reality Labs cited by NYT.
Russia vs. Telegram (again)
Russia’s cold war with Telegram may be about to enter a hot phase: according to Russian media, the country might soon ban the app completely. RBK’s sources say the messenger could be blocked starting April 1, and The Bell reported that Russian authorities have already informed major telecom providers of the decision.
Russian authorities have also reportedly opened a criminal case against Telegram’s founder, Pavel Durov, for “assisting terrorist activities.”
At the same time, Russian troops fighting in Ukraine will reportedly be permitted to use Telegram, as the home-brewed messenger Max has been deemed insufficiently secure – all while the government is trying to force the rest of the population onto Max by making it mandatory for official communications and access to government digital services. It sounds like an absurd joke, but it’s not.
Russian internet censor Roskomnadzor first tried to block Telegram in 2018 and failed. In 2020, the agency announced that Durov “voiced readiness to fight terrorism and extremism,” so it would stop trying to block the app. However, last summer, Russia blocked calls in Telegram. Earlier this year, Russians using Telegram without a VPN reported significant performance issues.
Who will prevail this time: Roskomnadzor, VPN technology, or Durov’s diplomatic skills? And why, for sanity’s sake, is the deadline April 1?
I guess we’ll find out soon.
Watch the watchers: Intellexa execs convicted
Four top managers of Intellexa, the maker of the Predator spyware, have been sentenced to prison by a Greek court for deploying spyware and violating the confidentiality of communications, Bloomberg reports.
Greek politicians, civil servants, and journalists fell victim to a massive spyware attack in the summer of 2022, when their phones were infected with Predator. The government’s involvement hasn’t been formally proven and no public official has been charged, although the country’s secret service may have financed the campaign, according to Bloomberg.
So far, the Greek court only convicted four key figures of Intellexa: Tal Dilian, the founder of the company and a former Israeli intelligence agency official; Yannis Lavranos and Sara Hamou, who were legal representatives and administrators of Intellexa; and Felix Biggio, who helped distribute the spyware.
Intellexa was sanctioned by the U.S. in 2024 for targeting U.S. citizens, but in December 2025, the Trump administration lifted the sanctions on three people in the Intellexa leadership, including Hamou. According to the official explanation, they had “demonstrated measures to separate themselves from” the firm. The rationale did not satisfy five Democratic senators who demanded a briefing on the matter last week.
Tips and Tricks: Go European
For people reconsidering their relationship with U.S. tech corporations, The Guardian provides a list of Europe-based alternatives to popular online apps and services: privacy-focused browsers, email services, social media, and online shopping solutions. Many of the options emphasize ecological responsibility and take a less exploitative approach to users’ data than their dominant U.S. counterparts.
***
And that’s all from me for now, guys!
Stay vigilant.
Anna

