First blood
Do tech companies have the will and resources to stop the emerging AI surveillance in its tracks?
Hello and happy Friday, readers! The snow is melting but the news of digital censorship and surveillance keeps piling up – I’ve done some shovelling for you and can’t wait to share.
The contradictions between the U.S. government’s tech surveillance agenda and the people directly involved in making it work took the most dramatic form at the end of last week.
Anthropic very publicly fell out with the Pentagon over how it could use Anthropic’s AI product: the firm did not want to do anything with AI-powered mass surveillance or killer drones. The Pentagon, apparently, wanted both and absolutely wasn’t having Anthropic’s ethical qualms.
The result? Anthropic was kicked out of the deal, OpenAI gleefully replaced it, Donald Trump and Pete Hegseth fired off angry social media posts. Anthropic was also designated a supply chain risk – something normally reserved for foreign companies.
Any battle between a state and a corporation is lopsided in favour of the former: even if a corporation is very big, any minimally viable government – let alone the U.S. government – wields far greater power. Anthropic is not oblivious to that: its CEO Dario Amodei has already had to apologize for the tone of his comments on the situation in a leaked corporate memo, and to say the company will provide Claude to the Pentagon “at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so.”
With the stakes continuously rising, the ultimate question looms: will the ethical principles survive this battle of wills? Will the tech industry emerge from this with some dignity intact – or completely broken and bent to the will of the newly belligerent state?
But as long as there are people raising their voices against the excesses of power – we have something to hope for. And we had some examples this week!
So let’s get into it.
If you enjoy this newsletter, please take a moment to share it with a friend or two! I appreciate and celebrate every new subscriber.
Biometrics briefing
The New York City Council is discussing two bills that would prohibit retailers and landlords from using biometric recognition to identify people. – Biometric Update
UK retail chains are testing facial recognition tech to prevent shoplifting. – Financial Times
The UK police will pilot a mobile app for facial recognition in London. – The Guardian
Switzerland is postponing the launch of its national biometric ID due to public distrust in the safety and security of the technology. – Heise Online
Sam Altman’s World Foundation will deploy its iris-scanning Orb devices in retail stores across Japan. – ID Tech Wire
Department of War vs. ethical AI
An important battle around large-scale state surveillance took place at the end of last week. Anthropic, which used to be a key provider of AI tools for the U.S. government, clashed with the Department of War and its head Pete Hegseth over the company’s demand that its product not be used for mass surveillance of Americans or for autonomous lethal weapons.
Negotiations ended in an epic falling-out between the U.S. government and Anthropic: the Pentagon designated Anthropic a “supply chain threat” (which the company is planning to challenge in court) and Trump called the firm “leftwing nut jobs” in a Truth Social post.
OpenAI seized the moment and, without missing a beat, swept up the contract that had just slipped out of its rival’s hands. The deal, which OpenAI CEO Sam Altman later had to admit “looked opportunistic and sloppy,” immediately led to a surge of ChatGPT uninstalls.
Employees of OpenAI and Google responded with a joint open letter calling for a collective rejection of unethical demands from the U.S. government:
“We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight,” says the letter, titled “We Will Not Be Divided” and signed by 885 Google employees and 100 OpenAI employees.
OpenAI claimed the deal included the same restrictions as those demanded by Anthropic. Anthropic’s CEO Dario Amodei called the statements “safety theater” and “straight-up lies” in a memo to his employees, according to The Information.
The Electronic Frontier Foundation reasonably notes that the way AI tools are used cannot be left to either the U.S. government or Big Tech to decide, as both have dubious records with privacy. Instead, a new comprehensive law should be produced by Congress.
What’s at stake here? The U.S. government holds a vast amount of data about everyone living in the country – and regularly buys more from private data brokers. Putting all this data together and feeding it to artificial intelligence algorithms will create a digital surveillance superpower – and most probably annihilate privacy as we know it. Take it from Amodei’s blog post commenting on the controversy:
“AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life – automatically and at massive scale.”
The U.S. government is currently pooling data held by various agencies under the auspices of the Department of Homeland Security, while ramping up surveillance capabilities. A hacker group going by “Department of Peace” breached the DHS and published the agency’s contracts with more than 6,000 companies, TechCrunch reported. If you need a refresher on the U.S. tech surveillance arsenal, check out Wired’s recent summary of DHS’s purchases.
The Department of War is a different agency, but the trend is clear: the government wants all data in one place, easily accessible and searchable for any chosen purpose. AI would come in quite handy for that.
AI surveillance vs. human privacy and dignity – this battle is happening in the U.S. and will soon unfold in many other places around the world. In some jurisdictions, it’s already lost.
When ads are spying on you
Another detail of the current surveillance-state build-up in the U.S., courtesy of 404 Media: reporters have gotten their hands on Customs and Border Protection documents explaining how CBP buys data about cell phones’ locations from the online advertising marketplace.
CBP takes advantage of a procedure known as real-time bidding, or RTB. It works this way: when an app on your phone is about to show you an advertisement (it can be a dating, weather, or news app, or a mobile video game, for example), data about that particular device is recorded: consumer activity and the date, time, and location of the interaction. This information, tied to your device’s advertising ID, is then used to sell advertisers a chance to show you ads – they bid for the opportunity to display their ads to particular demographics.
The data does not include the name or phone number of the device’s owner, but it still allows advertisers to track the phone’s location, 404 Media writes. And it can be purchased not only by commercial companies seeking to run ad campaigns, but by government agencies as well.
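The mechanism above can be sketched as a toy bid request. The field names loosely follow the public OpenRTB specification, and every value here (the app name, advertising ID, and coordinates) is hypothetical – the point is simply that the payload carries no name or phone number, yet it ties a persistent device identifier to a precise location:

```python
import json

# Illustrative OpenRTB-style bid request (field names per the public OpenRTB
# spec; all values hypothetical). Every bidder on the ad exchange receives
# this payload whether or not it wins the auction -- so a "bidder" interested
# only in the data can simply listen and log.
bid_request = {
    "id": "req-0001",                      # unique request ID
    "app": {"name": "ExampleWeatherApp"},  # the app showing the ad
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "geo": {"lat": 40.7128, "lon": -74.0060},       # device location
    },
}

# No name or phone number -- but the advertising ID plus location is enough
# to follow one phone over time.
fields_shared = sorted(bid_request["device"].keys())
print(json.dumps(bid_request, indent=2))
print(fields_shared)  # ['geo', 'ifa']
```

Because the advertising ID stays stable across requests, anyone collecting these payloads at scale can reconstruct a device’s movement history without ever learning the owner’s name.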
The practice isn’t new, either: back in 2020, The Wall Street Journal reported that CBP and ICE purchased commercial location data, which helped them identify and arrest immigrants. In 2025, the Federal Trade Commission found that the vendor, called Venntel, collected the data without consent and sold it illegally.
Lawyers against the machine
In the meantime, the Electronic Frontier Foundation and American Civil Liberties Union filed a joint amicus brief in a court case considering an illegal export investigation. The EFF and ACLU are asking the U.S. Court of Appeals for the Third Circuit to require a warrant for searches of electronic devices at the U.S. border.
Currently, customs agents have broad discretion to search devices, and there is little passengers can do. This intrusion on privacy does not serve legitimate investigations, the EFF argues: border searches are meant to catch the smuggling of prohibited goods, but electronic devices can only contain digital contraband – and since digital materials can cross borders over the internet, searching passengers’ devices cannot stop them anyway.
Border searches are not designed to find evidence for criminal prosecution, so data found on electronic devices that were searched without a warrant at the border “cannot and should not be used as evidence in court,” the EFF says.
Journalists raise their voices
Modern media corporations are large and complex organizations, often combining a variety of functions: while one part might be covering the excesses of the “Department of Government Efficiency,” another one might be catering to ICE.
This week, reporters at two large media corporations voiced concerns over their parent companies’ relationships with ICE, the Poynter Institute writes. Over 200 journalists at Law360, a legal news outlet, demanded that their parent company RELX stop working with the Department of Homeland Security and stop providing the agency access to the LexisNexis database of public records.
At the same time, a group of Reuters journalists called for their parent company Thomson Reuters to explain whether it had done any “human rights and civil liberties due diligence” before signing several contracts with DHS.
Journalists are following in the footsteps of tech workers, who have become increasingly disturbed by their employers’ collaboration with the DHS: in January, over 2,300 employees of Microsoft, Apple, Google, Amazon, Oracle, Meta, and other companies published an open letter asking their leadership to cut ties with ICE. Employees of Salesforce and Palantir also voiced concerns to their management internally.
Censor, leave the kids alone
Age-verification laws are quietly sweeping the West, and privacy experts see a disturbing trend. In 2025, several jurisdictions took measures to shield minors from harmful online content by implementing age-verification requirements for online platforms. In particular, the UK rolled out its Online Safety Act; French courts ruled that porn websites can check users’ ages; Australia banned kids under 16 from social media; and half of U.S. states now have some kind of age-verification law.
On Monday, hundreds of security and privacy researchers around the globe published an open letter calling for a moratorium on age verification until a consensus is reached as to how to deploy it safely and efficiently.
First of all, the existing measures to keep minors away from harmful content don’t work, the researchers argue. People can use VPNs to circumvent their countries’ restrictions, employ AI tools, or simply borrow someone else’s ID to pose as adults. The global infrastructure needed to perform age verification reliably regardless of location simply does not exist and is unlikely to emerge, given the wide variety of regulatory approaches across jurisdictions.
At the same time, users are forced to either disclose personal information to verification service providers or abandon compliant platforms and use alternatives that might expose them to malware and scams. Verification platforms themselves can abuse their access to vast amounts of personal data or be compromised – as when Discord’s verification partner was breached in October, exposing the data of 70,000 users to malicious actors.
As an alternative, researchers suggest regulating the addictive social network algorithms and surveillance-based advertising practices that use minors’ data to aggressively sell them goods and services.
Street cameras bleeding war secrets
Iran is under a total internet shutdown again – connectivity started to recover at the end of February, but monitoring groups say the country has now been offline for over 120 hours.
Keeping the country mostly away from the World Wide Web did not help the ayatollahs’ regime prevent the deadly attack on Ali Khamenei, the country’s supreme leader, according to the Financial Times. The newspaper reports that nearly all the traffic cameras in Tehran had been hacked for years and were broadcasting footage to Israel’s intelligence services.
The stream of real-time data allowed Israel and the U.S. to see when Khamenei would be in his office on the Saturday morning when the missiles struck. Israel also reportedly hacked about a dozen cell phone towers in the area, flooding the phones of Khamenei’s security detail and preventing them from receiving warnings, according to the FT. “We knew Tehran like we know Jerusalem”, an Israeli intelligence official told the newspaper. The operation was carried out by Israel’s intelligence Unit 8200.
The irony of this situation is too bitter to be funny, but instructive nonetheless.
From Russia with facial recognition
Iran will probably have to reconsider its surveillance practices, but for now, the regime keeps tracking citizens with facial recognition technology sold to the country by Russia, Forbidden Stories found. The technology is also capable of mapping people’s social connections, as well as tracking cars’ license plates.
In 2019, a Russian surveillance tech firm with government ties, NTech Lab, partnered with the Iranian company Rasad Intelligent Technologies. Later, Rasad was merged into Kama, a company led by a member of the Revolutionary Guards, which took over the distribution of NTech Lab’s product, FindFace. Then in 2020, Kama granted another government-linked company, BPO, a lifetime license for the facial recognition software, according to Forbidden Stories.
Since then, the technology has been deployed in subway stations in Iran’s major cities – similar to how FindFace is used in Russia.
FindFace, which has reportedly been used by Russian authorities, among other purposes, to track and detain people involved in protests, is more powerful than any similar product available on the Iranian market, Forbidden Stories’ sources say. According to a sample of videos from the system reviewed by reporters, the software can track faces in motion, along with details like estimated age, gender, detected emotion, and whether the person has a beard or is wearing glasses or a mask.
NTech Lab is a successful Russian startup with a complicated history. The company, founded in 2015, got into the global spotlight almost immediately: that same year, it won the prestigious MegaFace Challenge run by the University of Washington. In 2017, it also won the Face Recognition Prize Challenge by the Intelligence Advanced Research Projects Activity (IARPA) – a notable recognition from the U.S. intelligence community.
NTech Lab even received an invitation from Y Combinator, but chose to work with Russia’s government-backed corporation Rostech and focused on piloting its technology on the streets of Moscow – in 2017, Russia outpaced China in the scale of such trials, The Bell wrote. The technology was also provided to police during the 2018 World Cup in Russia and reportedly facilitated multiple arrests of people on wanted lists.
After the beginning of Russia’s invasion of Ukraine, NTech Lab co-founders reportedly failed to convince the company’s management to relocate the team and ended up resigning and leaving Russia, according to Reuters. In 2022, NTech Lab had 1,100 clients globally, including dozens of U.S. companies like Intel, SpaceX, Dell, and Philip Morris, Business Insider reported based on a leaked document.
In 2023, the firm was sanctioned by the EU for human rights violations. In 2024, it was added to the U.S. Department of Commerce’s Bureau of Industry and Security Entity List, which includes organisations acting “contrary to the national security or foreign policy interests of the United States.”
But it turns out, there are always clients who don’t care about sanctions.
***
And that’s all from me this week, folks.
Stay vigilant!
Anna

