Humans, surrender
The rise of AI needs surveillance capitalism like your phone needs charging
Happy Friday, privacy-minded readers! We’re heading towards the end of another eventful week in digital surveillance and censorship around the world, and some pretty important things have come to my attention in the past seven days.
But first, a little philosophical detour.
Recently, I watched a couple of episodes of The Testaments, The Handmaid’s Tale spinoff (anyone else?), and it made me think about the connection between privacy and human dignity: our ability to live our lives and connect to this world happily and on our own terms.
In Gilead, the dystopian fictional version of a religious fundamentalist America, privacy almost doesn’t exist: to fight the catastrophically low birth rates, the new government makes everything related to sex and reproduction a matter of extreme public importance.
Hence, every activity and event having to do with sex is strictly regulated and controlled, and disobedience is punished by death. Girls having their first periods go through an initiation ceremony, flirting is illegal, sex is allowed only within marriage and for the conception of a child, adulterers face public execution, and giving birth is a party with supporters and spectators surrounding the mother.
The result is omnipresent fear, loneliness, and longing for the forbidden joys of genuine connection, felt throughout society. And the loss of privacy plays a huge role in this: if no part of your life, even the most intimate, belongs to you anymore, if there is no secret you can keep from the world, can you still feel the power and joy of existence? Is that still you? Are you still alive?
And of course, this loss of self amid total surveillance has been portrayed before – take Orwell’s 1984, the textbook image of a surveillance state. But the more I read about digital surveillance in democratic states – let’s leave Iran, Russia and China aside for a moment – the more it seems to me that we are willingly sleepwalking into a world with no privacy. Only this world is not scary like Gilead or Orwell’s Oceania – it’s a sweet lullaby of comfort and effortless consumption, with the burden of thinking your life through completely removed. Almost a paradise that feels deserved.
It is indeed deserved: we pay for the convenience of apps and digital services with our data, and we feed it generously into AI algorithms that are supposed to free us from the burden of unpleasant work, annoying routine, the energy-draining chain of little everyday actions, checks, and decisions. If an algorithm knows all about your life, it can automate it just the way you want – doesn’t that sound sweet?
Unless you think about how little control you have over this process. So try to think about it this way: the data you provide to make your job easier can – and will – be used to replace you. The data on your everyday tasks, errands and goals can – and will – help track your every move. The hopes, ambitions and worries you share with AI can – and probably will – be used to shape and manipulate your life choices.
Does this paradise of effortless comfort still feel warm and caring? Does the sense of being constantly seen and analysed pierce this harmony with a chilling undertone?
This week, I’ve learned a bunch of pretty disturbing things about the new ways governments and corporations are using AI-powered surveillance, and how good the technology is becoming day by day.
So, without any more ranting from me – let’s get into it!
Please take a moment to share and support this newsletter if you’re enjoying it.
Biometrics briefing
At least 13 people have been wrongfully arrested in the U.S. because of face recognition technology’s mistakes. – ACLU
A UK court has ruled that the use of live facial recognition tech by police is lawful and does not violate human rights. – BBC
AI startup Clarifai said it had deleted millions of OkCupid users’ photos after FTC scrutiny and a court settlement. – Reuters
ICE is looking at you
The U.S. Department of Homeland Security is developing smart glasses that will allow Immigration and Customs Enforcement (ICE) agents to identify people in real time, journalist Ken Klippenstein writes on his Substack.
According to the budget documents Klippenstein cites, the smart glasses will be able to automatically identify people from a distance and check them against government databases, as well as secretly record them. The device must be available to ICE agents by September 2027.
“It might be portrayed as seeking to identify illegal aliens on the streets, but the reality is that a push in this direction affects all Americans, particularly protestors,” an unnamed DHS attorney told Klippenstein.
Last week, dozens of human rights organizations signed a letter to Mark Zuckerberg, asking him to cancel Meta’s plans to add facial recognition to the company’s smart glasses. The organizations, led by the American Civil Liberties Union (ACLU), also demanded transparency on Meta’s plans to provide law enforcement and immigration agencies with facial recognition devices.
It’s unclear which private companies might be involved in the DHS’s current smart glasses project, but it looks like omnipresent, clandestine facial recognition recording is coming to the streets of America.
On a slightly brighter note, recording ICE agents with drones won’t be criminally punishable anymore: the Federal Aviation Administration (FAA) has softened the rule creating a no-fly zone above DHS facilities and vehicles, 404 Media reports. The change came after Minnesota journalist Rob Levine sued the FAA over the regulation.
Teach the robot that will replace you
Meta, the parent company of Facebook, Instagram and WhatsApp, is taking workplace surveillance to another level, announcing to its staff that AI will be constantly watching them work.
The corporation will install new tracking software on U.S. employees’ computers that will capture mouse movements, clicks, and keystrokes to train Meta’s AI models, Reuters reports.
Meta is actively developing AI agents that can build, test, and ship its future products, while planning to lay off 10% of its workforce globally in May – and more later this year. Hard not to see the new bossware initiative as preparation to replace humans with AI – built by the same humans who are slated to be sacked. Not to mention the unprecedented level of workplace surveillance, which probably comes as an unpleasant surprise to Meta staff.
Then again, Meta has been in the vanguard of surveillance capitalism for a while – employees probably don’t have high expectations of workplace privacy anyway. But even if you have never worked at a big tech corporation with AI ambitions, your work emails and Slack messages can still become AI training material.
SimpleClosure is a startup that helps failed businesses wind down their operations – and buys their entire digital footprint. Then, the data is sold to AI companies – the demand has been “insane”, the CEO told Forbes.
“There’s a feeling of a gold rush from these companies trying to get their hands on real-world data,” he said.
That’s not surprising: researchers have been saying for a while that once AI algorithms have consumed everything that’s ever been published online, they have to turn to more private communications. Access to such data will be like owning a gold mine – SimpleClosure is ahead of the curve here, but it won’t be the last company to dig in.
AI agents are already reading your emails – unless you manually opt out. The terabytes of unencrypted messages stored on the messenger apps’ servers might be the next frontier.
SimpleClosure says the data gets stripped of personal information before being fed to AI – but with AI’s analytical capabilities growing exponentially, there is no guarantee those messages can’t still be attributed to their authors.
The algorithms have already learned to unmask pseudonymous forum accounts, identify writers’ native languages, find potentially identifying information in social media posts and infer users’ psychological traits, locations, incomes, sexes, and ages, among other things, MIT Technology Review’s Grace Huckins writes.
We leave traces of our identity around us, like tiny breadcrumbs, even when we try not to. A well-trained AI agent can chain them together like a team of professional detectives, once it knows what to look for. All the data is already there.
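To see why “anonymized” rarely means anonymous, here is a purely hypothetical sketch of the classic linkage technique: two datasets that each look harmless can re-identify someone when joined on a few shared traits (city, age, a spelling quirk). All names and records below are invented for illustration; real linkage attacks work on the same principle at far larger scale.

```python
# Invented example data: a "scrubbed" forum dump and a public profile list.
scrubbed_posts = [
    {"user": "anon_42", "city": "Austin", "age": 34, "quirk": "colour"},
    {"user": "anon_7",  "city": "Boston", "age": 29, "quirk": "color"},
]

public_profiles = [
    {"name": "J. Doe",   "city": "Austin", "age": 34, "quirk": "colour"},
    {"name": "A. Smith", "city": "Boston", "age": 51, "quirk": "color"},
]

def link(posts, profiles):
    """Match 'anonymous' accounts to named profiles on shared traits."""
    matches = {}
    for post in posts:
        candidates = [
            p["name"] for p in profiles
            if all(p[k] == post[k] for k in ("city", "age", "quirk"))
        ]
        if len(candidates) == 1:  # a unique combination is an identity
            matches[post["user"]] = candidates[0]
    return matches

print(link(scrubbed_posts, public_profiles))  # {'anon_42': 'J. Doe'}
```

The point isn’t this toy join, but that a handful of mundane attributes can be unique enough to pin a name on a “stripped” record.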
You’ve been priced
Have you heard about surveillance pricing? Companies gather information about customers while they shop online and then show them prices based on their inferred wealth and consumer behaviour. Companies tend to deny – understandably – that they are spying on you to see if they can get you to pay more, but someone got caught red-handed this week.
An X user posted a tweet complaining that his JetBlue airfare had gone up sharply and he had no choice but to pay, as he needed to attend a funeral. JetBlue replied by recommending that the user clear his browser cache and cookies, or open the page in an incognito window. The airline soon deleted the reply, but people couldn’t help noticing that if clearing browser data affects the price displayed, it must not be the price everyone sees.
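The cache-and-cookies advice hints at the basic mechanism. As a purely hypothetical sketch – not JetBlue’s or anyone’s actual logic – a server only needs a browser-stored counter to creep a fare upward for a visitor who keeps searching (a signal of urgency), which is exactly what incognito mode would reset:

```python
BASE_FARE = 199.00  # hypothetical starting fare

def quote_fare(cookies: dict) -> float:
    """Return a fare, marking up visitors whose cookies show repeat searches."""
    searches = int(cookies.get("search_count", 0))
    # Each repeat search suggests urgency, so the markup grows, capped at +50%.
    markup = min(searches * 0.10, 0.50)
    return round(BASE_FARE * (1 + markup), 2)

# A first-time (or incognito) visitor sees the base fare...
print(quote_fare({}))                     # 199.0
# ...while someone who has searched three times sees a higher one.
print(quote_fare({"search_count": "3"}))  # 258.7
```

Clearing cookies simply deletes the counter – which is why the advice “works,” and why it gave the game away.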
Congress has taken notice: Representative Greg Casar and Senator Ruben Gallego sent a letter to JetBlue CEO Joanna Geraghty asking whether the airline is actually using customers’ browsing data to adjust prices. The company was also sued by a New York resident for surveillance pricing.
JetBlue denied the allegations, saying that fares on the website “are not determined by cached data or other personal information,” the company does not use AI to set individual pricing, and “all customers have access to the same fares,” Financial Times reports.
In the meantime, the issue has already gained enough attention to prompt the introduction of a slew of bills prohibiting or limiting surveillance pricing across America. According to Inside Privacy, more than 40 bills in 24 states have already been sponsored.
For example, a Maryland bill that is already awaiting the governor’s signature and will likely go into effect in October would prohibit surveillance pricing for food, as well as dynamic pricing, requiring grocery stores to keep their prices unchanged for at least one business day.
It’s funny how the concept of surveillance pricing brings the spirit of a Middle Eastern bazaar into the refined world of Western commerce: a seller looks at you, defines your tax bracket by the way you dress, move, and speak, and gives you the price he thinks you’d pay. With one caveat: in a bazaar, you can bargain your way to a better price – at a U.S. shopping mall, you can just walk out with empty hands.
People still don’t like Flock
Flock, an American startup making automated license plate reading (ALPR) cameras, keeps getting into hot water across the U.S. Its nationwide system of traffic cameras has notoriously been used by ICE to look up immigrants targeted for deportation, by police to monitor protest rallies, and at least once to track a woman who terminated her pregnancy.
A handful of American cities have already canceled their contracts with Flock over those controversies, and some people have even started destroying the Flock cameras in their cities.
This week, some states have moved to limit the scope of surveillance through ALPRs. In New York, local lawmakers want to restrict the sharing of information obtained from license plate readers with federal agencies, Gothamist reports. A new bill introduced into the state legislature would make ICE first obtain a warrant from a federal judge establishing probable cause before the agency can get the data. Also, any data collected by the cameras would have to be erased after 48 hours.
In Colorado, state lawmakers are looking to prohibit government officials from sharing the data with outside jurisdictions, with some exceptions. The bill would also require agencies to get a warrant to search the databases if the crime they are investigating occurred more than 72 hours earlier.
In New Mexico, a new bill would mandate that out-of-state agencies seeking access to ALPR data would need to declare they’re not seeking the information for the purpose of immigration enforcement, “protected health care activity” or protected constitutional activity.
In Iowa and West Virginia, pending bills would restrict warrantless access to the footage after 24 hours.
A bill in Ohio prohibits commercial use of ALPR data.
A Kentucky bill, passed in March, mandates destroying the data on the ALPRs within 90 days, unless it’s tied to an active criminal investigation. The data cannot be sold or shared.
According to HaveIBeenFlocked.com, over 2.6 million license plates across the U.S. have appeared in searches by various U.S. agencies using Flock.
If you want to learn about Flock cameras in your city or connect with local anti-Flock activists, check out DeFlock, Eyes On Flock, and alpr.watch.
No more reading deleted texts
Last week, we learned that deleted Signal messages on iPhones could still be retrieved by law enforcement agencies using forensic data extraction tools: previews of the messages remained saved in the iPhone’s notification database, 404 Media found.
No more: Apple released an update fixing that bug this week, TechCrunch writes. After the issue was reported, Signal asked Apple to address it, Signal’s president Meredith Whittaker wrote on Bluesky.
“We are very happy that today Apple issued a patch and a security advisory. This comes following 404 Media reporting that the FBI accessed Signal message notification content via iOS despite the app being deleted,” Signal’s official Bluesky account posted on Wednesday.
Being watched in the arena
Sports and concert venues are becoming highly surveilled places around the world, adopting biometric technology for security, ticketing, and other purposes. New York City’s Madison Square Garden has become the epitome of the trend, Wired writes. The magazine published a detailed story of how the company’s owner, James Dolan, has turned the arena into a surveillance panopticon.
Facial recognition technology is used to record guests, assign them risk scores, and sometimes follow them around the venue – even when they present no danger, Wired writes.
For example, Knicks fan Nina Richards, a transgender woman who attended games at Madison Square Garden regularly, reportedly became a fixation for the company’s security chief John Eversole. He directed employees to compile surveillance dossiers on Richards simply because she was a transgender woman, according to a lawsuit by former staffer Donnie Ingrasselino, reviewed by Wired. Eversole believed that Richards should be kept “away from the players.”
It has been known for a while that Madison Square Garden aggressively uses its facial recognition system to ban people Dolan doesn’t like from the venue, including lawyers from firms involved in disputes with the company – more than 1,500 people may have been banished from Dolan’s entertainment venues this way.
Dolan might be an especially egregious example of this trend, but he’s not alone. Biometric surveillance is becoming routine at sports venues as companies incentivise attendees to exchange their biometric data for discounts and perks, Wired writes: “In that sense, Dolan isn’t an outlier; he’s a model.”
Would you trade your face or palm print for a discounted ticket?
***
And that’s all from me for now, guys. I’m taking next week off for a little breather, so you’ll hear from me in two weeks.
Stay vigilant!
Anna

