A tale of two worlds
When surveillance becomes omnipresent, is there a way out?
Welcome, Control, Spy, Delete readers! Spring is coming to the Northern Hemisphere, but the creeping spread of digital surveillance adds a chilling feel to the warm May days.
One of the important things I learned this week is that Meta may be able to read WhatsApp messages, according to government communications cited by Bloomberg. WhatsApp added end-to-end encryption in 2016, meaning that only the sender and receiver are supposed to see the content of messages. If Meta does in fact have access to users’ messages, it would have vast implications for privacy.
Meta has big ambitions with AI. And given that AI has already exhausted most of the publicly available content on the internet for its training, unencrypted private messages seem like a lucrative resource to make the models even better at understanding humans, replicating them, analysing them – as well as manipulating them. And WhatsApp might be just one of the many messenger apps that are a little less private than they advertise.
This Black Mirror stuff is pure speculation so far, but are we far away from finding out whether it can become our new reality? And in the worst-case scenario, will privacy still be possible in the digital world – at all?
Yes, there are other messengers, and there are various unorthodox tech protocols that offer a clumsier version of online messaging than what we are used to. I wonder if a new generation of lo-fi, non-commercial communications infrastructure will emerge against the backdrop of a world where nothing you do online can be your secret anymore.
Will the digital realm split into two parallel realities: the mainstream, comfortable world where we all live in a glass aquarium, and the cypherpunk underground where a stubborn minority lives under the radar, forever under suspicion for their non-compliance with the new shiny surveillance state?
I think we’ll see very soon, and it will be an interesting time. Unless an AI apocalypse ends it all! Just kidding. I don’t know about you, but I’m always here for interesting times.
So let’s get into it, shall we?
Always love to see new subscribers – take a minute to share this newsletter with your privacy-minded friends if you’re enjoying it!
Biometrics briefing
Meta will start analyzing profile pictures with AI to check whether users are older than 13 in the EU and Brazil. – Meta
New York City Hall has asked Lyft not to use facial recognition tech to verify Citi Bike riders’ age. – New York Daily News
A new bill in Louisiana suggests biometric age verification for bar goers. – Biometric Update
What can WhatsApp see
A big story I missed last week, which absolutely deserves your attention: Bloomberg reports that the U.S. Commerce Department’s Bureau of Industry and Security closed its investigation into claims that Meta can read users’ WhatsApp messages. The company is denying the allegations, which have also become the basis of a lawsuit filed in San Francisco in January.
According to Bloomberg, an unnamed agent with the Office of Export Enforcement emailed multiple officials in other agencies sharing his preliminary findings after 10 months of investigation. WhatsApp messages are believed to be end-to-end encrypted, but according to the agent, the content of messages is stored unencrypted on WhatsApp’s servers. His conclusions, cited by Bloomberg:
“There is no limit to the type of WhatsApp message that can be viewed by Meta. The misconduct of Meta and its officers, including current and former high-level executives, involve civil and criminal violations that span several federal jurisdictions… Meta can and does view and store all the text messages, photographs, audio and video recordings,” the agent said in an email in January, adding that people with access to the messages’ content include contractors and a “significant number of foreign/overseas workers in India.”
However, soon after he sent that, the agency closed the investigation at the direction of senior leadership, according to Bloomberg. Sounds magical…
And you might think: who cares about my texts to mom? Who would bother to read all that? Probably not humans. But if Meta can read your messages, will it resist the temptation to use terabytes of personal communications to train its AI models? And when those models know everything about your life, who will be able to take advantage of that knowledge? Better hope it’s people who wish you well.
A DNA database of protestors?
Four Chicagoans are suing the U.S. Department of Homeland Security (DHS) for gathering and storing their DNA information, Ars Technica reports. The plaintiffs protested ICE raids in Chicago this winter and were arrested near the Broadview ICE facility.
They are accusing the federal government of “wrongfully arresting peaceful protesters, collecting their DNA, uploading their genetic profiles to government databases, and storing their DNA samples in federal labs – permanently.” None of the protestors except one had been convicted of any crime, the complaint states, but their genetic information was collected anyway. The one convicted activist had pleaded guilty to concealing a prior felony charge.
Under the current law, law enforcement can collect the DNA information of a person they arrest to identify them, but only if they have been arrested with probable cause for a serious offense, which has been confirmed by a judicial officer. None of that was true for the arrested protestors at the ICE facility, the lawsuit reads.
The rules of DNA collection in Illinois are even stricter, allowing the procedure only for people arrested for first-degree murder, home invasion, or sexual assault, and only after a green light from a judge or jury.
The plaintiffs fear their genetic information can be used for unwarranted surveillance, and that the uncontrolled DNA collection, if not stopped by the court, can lead to the creation of a vast DNA database of people who protest the current administration’s policies.
The U.S. is definitely moving towards universal DNA databases of both immigrants and citizens, regardless of whether they have any problems with the law. How can such information be used? It’s hard to envision hypothetical scenarios until we see real ones, but we do know that once you submit your DNA, you effectively put your whole family into the database – whether they agreed to that or not. A novel technology called forensic investigative genetic genealogy, or FIGG, helps identify a person using their relatives’ DNA data, MIT Technology Review wrote last year.
Our faces don’t belong to us anymore, thanks to facial recognition technology; our digital footprint is food for AI algorithms. But if you don’t want your DNA in that surveillance mix: it’s hard to say no to a police officer while your hands are in handcuffs, but at the very least, maybe don’t order that genetic testing kit on the internet. Just a thought.
The face of surveillance
404 Media has confirmed that DHS is working on a project for surveillance smart glasses for ICE agents.
The project was first revealed by independent journalist Ken Klippenstein, who found it in the DHS budget documents. A DHS official has confirmed the plans to 404 Media, along with another person who attended a conference where a senior ICE official spoke.
The smart glasses will “supplement” Mobile Fortify, a mobile facial recognition app ICE agents use on their phones to identify both people they target for deportation and Americans who protest their raids. Protestors filming ICE raids can be identified this way, so that DHS can later run background and criminal history checks on them, look up their social media accounts, and track their license plates.
According to the documents Klippenstein cited, the smart glasses, which ICE is expecting to get next fall, will help agents automatically identify people from a distance, check them against government databases, and secretly record them.
Some ICE agents are already wearing Meta smart glasses during their raids, though according to the agency, those are personal devices and cannot be used to film people during operations. Earlier this year, multiple human rights organizations asked Mark Zuckerberg to cancel Meta’s plans to add facial recognition to the smart glasses, and to reveal whether the company was planning to provide law enforcement and immigration agencies with facial recognition devices.
It’s not clear who the DHS contractor is or will be for the facial recognition goggles, but the agency will allocate $7.5 million for the project, according to NewsNation.
“Bring me that user”
In the meantime, DHS surveillance does not stop at the border – it goes wherever it can reach via the long hands of American tech companies.
The agency is demanding location data of a Canadian who criticized ICE raids in the U.S., particularly the killings of Renee Good and Alex Pretti in Minneapolis, Wired reports. DHS is using a customs summons – a type of administrative subpoena that is normally used for investigations related to import activities. But the man in question hasn’t even been to the U.S. for more than ten years, his American Civil Liberties Union (ACLU) lawyer maintains.
The summons did not provide a reason for the investigation but contained a request to Google not to notify the man. The company notified him anyway.
The DHS is actively using administrative subpoenas, which are much easier to obtain than judicial subpoenas, to unmask social media users who criticize ICE. The agency issued hundreds of such requests to Google, Reddit, Discord and Meta last year. When those don’t work, DHS is willing to take more extraordinary steps.
In April, the government summoned Reddit to appear before a grand jury and provide information about one of its users after failing to get the same information through an administrative subpoena (it was successfully contested in court). The tactic is unprecedented so far, according to experts – but it may set a precedent for other cases. It’s not clear whether Reddit ended up providing the information or not.
But that’s just another reminder that the current U.S. administration wants to identify everyone who criticizes it, and to keep tabs on them.
On the other hand, maybe very soon, government agencies won’t even need disclosures from digital platforms – they will be able to unmask anyone who writes regularly on the internet with AI. According to recent research, AI models have already demonstrated the ability to match pseudonymous accounts to LinkedIn profiles, analyse online posts for potentially identifying information about their authors, and infer users’ sexes, ages, psychological traits, and more.
Writer Megan McArdle tested Anthropic’s Claude to see if it could recognize her as the author of some unpublished pieces. The model did great, she wrote in her column for the Washington Post. And while her writing is abundantly present on the internet with her name attached to it, there is no guarantee the algorithm couldn’t also identify less public authors – and that may be a major danger to journalism:
“Journalism often relies on anonymous sources. So does law enforcement. What do we do when a stray quotation could pinpoint who’s speaking?” McArdle asks.
I think we will find out very soon.
***
But in the meantime, that’s all from me for this week, guys.
Stay vigilant!
Anna