The Great Algorithm
The future of AI surveillance is already here, and yes, you can take a look
Welcome, readers! Another weekly serving of digital censorship and surveillance news is here for you.
I’m Anna Baydakova and my day gets brighter every time I see a new subscriber – please share this little newsletter with anyone who might also enjoy it.
I often say (you might have noticed) that the future of digital surveillance is artificial intelligence watching us, predicting our every move and controlling how we follow the rules. And AI chatbots will be the main enforcers of censorship and propaganda.
Well, a glimpse of that future is available in China: the country is apparently leading the way in deploying the algorithms that help the Party control the population. “City brains” for predictive policing, “smart courts” for automated justice, “smart prisons” monitoring inmates’ moods – a lot of (scary) innovations of this sort are already in place, and more are currently being developed.
On the front of online censorship, Chinese chatbots like DeepSeek are prohibited by law from deviating from the Party’s ideological line, and it shows.
And China is more than just one nation infamous for its surveillance state – it’s a trendsetter for many nations and economies relying on Beijing for trade and investments. Will the rest of the world see the Chinese approach as a cautionary tale or a model to emulate? The answer might not be as obvious as it seems.
We’ll delve into that, and some more stories from the U.S., Russia and Myanmar.
Let’s get into it!
Biometrics briefing
U.K. police are planning to expand their use of facial recognition technology nationwide. The Home Office spent £12.6 million on the tech last year. – Sky News
Police in the Canadian city of Edmonton are piloting body-worn cameras with facial recognition. – CityNews Edmonton
Germany has opened the way to biometric identification in healthcare. – ID Tech Wire
Nigeria rolls out its national biometric travel passport. – ID Tech Wire
China and the reign of AI
If you’d like a sneak peek at what the AI-powered surveillance and censorship of tomorrow may look like, don’t miss the new report by the Australian Strategic Policy Institute. Researchers conducted a comprehensive study of how the Chinese government is using artificial intelligence to enhance its technologies of control.
The scope of the Chinese Communist Party’s grip on public life in the country is no secret, but the cutting-edge, constantly self-improving technology is making that control even more powerful. And given China’s expanding global influence, especially in Asia and Africa, whatever power it delegates to AI systems to control citizens will inevitably become a blueprint for other nations.
I’ve already told you about Chinese chatbots that are fully compliant with state censorship and are not allowed to provide any answers that deviate from the party line. The new report provides an explanation:
“Under the newly issued national standard Cybersecurity Technology: Basic Security Requirements for Generative AI Services, providers must screen their training data and exclude any source in which ‘illegal or undesirable information’ exceeds 5%.”
But the government also wants to use AI to control what citizens say online. Although AI censorship systems still depend on human reviewers, they already scan “vast volumes of digital content, flag potential violations, and delete banned material within seconds,” the report says.
China is making a special effort to monitor the online speech of ethnic minorities – especially in Uyghur, Tibetan, Mongolian and Korean languages. Current large language models (LLMs) like DeepSeek have not yet demonstrated strong performance in those languages, so the government is offering incentives for private companies to develop such capabilities.
AI also plays a key role in the development of predictive policing: systems that use hundreds of thousands of cameras deployed across cities to detect suspicious behaviour and dispatch police officers immediately.
In another striking development, China is already trying to use AI to automate its judicial system, the report says. Quote: “Shanghai’s AI-enabled criminal case handling system, the first of many in the country, integrates, reviews and compiles evidence for procurators and can even recommend sentences. Defence counsel can’t see or challenge the underlying model.”
And of course, one of the most surveilled places in any country is prison – and China is using AI to monitor inmates and analyze their behaviour in real time, the report says.
“Yancheng Prison, which houses high-profile prisoners, was being upgraded with smart-prison technology, including an extensive network of cameras and hidden sensors likened to neurons, which fed information to an AI-powered computer that can track inmates around the clock. The system also generates a report on every inmate at the end of each day, based in part on behavioural analysis facilitated by the camera networks.”
Another prison rolled out a facial recognition system that detects signs of anger on inmates’ faces, prompting intervention from a prison psychologist.
China is moving fast and aggressively in the field of AI-powered systems of control, setting trends that may soon be adopted by other countries, especially if their economies depend on China.
Who knows how soon what sounds like a dystopian tale half a world away will become everyday reality on the streets of your own city.
AI prison guard is listening
Speaking of global trends: U.S. prisons might adopt AI to monitor inmates, too.
Securus Technologies, a firm that provides inmate calling systems to U.S. prisons, has been using inmates’ calls to train its AI product. The firm’s president Kevin Elder told MIT Technology Review that Securus has been piloting AI tools to monitor inmate conversations in real time for signs of potential crime planning.
“We can point that large language model at an entire treasure trove [of data] to detect and understand when crimes are being thought about or contemplated, so that you’re catching it much earlier in the cycle,” Elder said.
Inmates are normally notified that their conversations with family are being recorded, but not necessarily that the calls’ content is being used to train AI. And in this case, they are effectively paying for it themselves – as opposed to the usual process, in which companies pay employees or contractors to train LLMs. Another disturbing detail is that Securus has previously been caught recording confidential conversations between inmates and their attorneys.
Obviously, prison is not a place where one can expect much privacy. However, using inmates as free labor in the AI mines might be a little disturbing by the standards of basic humanity, what do we think?..
iCloud patrol
U.S. Border Patrol has tools to break into phones, but if those fail, agents may request data from Apple, Forbes has found.
According to a warrant seen by the reporters, Customs and Border Protection (CBP) requested iCloud account details from Apple for two women suspected of smuggling a person into the U.S. from Mexico. It’s unclear whether Apple provided access.
The two women, both green card holders, were crossing the border in April at San Ysidro, California. Another man in the car presented a passport, which agents did not believe belonged to him. All three were taken into custody, and CBP tried searching the women’s iPhones with forensic tools from Cellebrite, on which the agency has spent millions. However, “complete mobile device data acquisition of the iPhones were unsuccessful,” the warrant says, so CBP filed a search warrant application with Apple.
U.S. law enforcement agencies frequently file warrants for access to iCloud accounts, which can contain backups of iMessage and WhatsApp messages, as well as photos and location history. According to Apple’s latest transparency report, between July and December 2024, the company received government requests related to national security investigations about the content of more than 76,000 accounts.
New Orleans on air again
City police will regain live access to footage from New Orleans’ network of surveillance cameras, Verite News reports. The nonprofit in charge of the program, Project NOLA, will also keep providing facial recognition alerts to the police.
Project NOLA, a local nonprofit that maintains a network of security cameras around the city, has been sharing footage with the police for the past 15 years.
The initiative became embroiled in controversy this spring after the Washington Post found that Project NOLA had been sending the NOPD live facial recognition alerts without proper disclosure.
Then in the fall, the police provided a homicide video from Project NOLA’s cameras to a local TV show, prompting a new wave of criticism. After that, the nonprofit said it would stop sharing live remote access with the police but would still allow officers to review footage at designated monitoring hubs. In October, the City Council passed an ordinance prohibiting the public sharing of unredacted graphic or sensitive imagery.
After meeting with New Orleans Police Department Superintendent Anne Kirkpatrick in early November, Project NOLA’s founder Bryan LaGarde announced that remote sharing of live footage would resume. And while live facial-recognition sharing has been put on hold, the organization will continue providing alerts to the police manually.
Voting at gunpoint
The U.N. human rights office has warned that the military junta currently in charge of Myanmar could use the upcoming election to track down its opponents. Reuters reports that voting machines used in Myanmar don’t allow people to leave the ballot empty or spoil it, and the ruling regime is forcing the population to vote.
James Rodehaver, head of the Myanmar team at the Office of the High Commissioner for Human Rights (OHCHR), said the Office has received reports that locals are being forced to attend military training sessions on how to use the voting machines in areas not yet under the junta’s full control.
“There’s a real worry that this electronic surveillance technology is going to be used to monitor how people are voting,” Rodehaver said. OHCHR has also received reports of displaced people being ordered by the military to return to their villages to vote, Rodehaver said.
Since 2021, when the junta overthrew the government led by Nobel laureate Aung San Suu Kyi, the country has been embroiled in a civil war.
WhatsApp for diplomats
For Russians abroad, it’s getting harder to stay in touch with their families at home. The country, which is waging war against Ukraine, is also in a battle with Western messaging apps.
On Thursday, Russia officially began blocking FaceTime and Snapchat, The Guardian reports. FaceTime was not particularly popular among Russians before – people mostly used Telegram and WhatsApp for calls. But since the country started blocking calls in WhatsApp and Telegram in August, FaceTime emerged as one of the alternatives. According to anecdotal evidence and my own experience, people in Russia can still use all three messengers for calls if they use a VPN. Signal has been blocked in Russia since August 2024.
In the meantime, a curious detail about Russia’s looming total ban on WhatsApp surfaced this week: turns out, the messaging app has been used by Trump’s envoy Steve Witkoff and Putin’s aide Yuri Ushakov in their recent communication. That may be the reason Russia’s censorship agency is preparing to fully block WhatsApp in Russia – not just calls – RBK reports.
Witkoff visited Russia this week to confer with Ushakov about the conditions for ending the war in Ukraine – an endeavor which has not resulted in any progress towards peace so far. The only tangible result has been the leaked conversation between Witkoff and Ushakov published by Bloomberg, offering an insight into the general vibe of the talks (very friendly).
In a comment to the Russian newspaper Kommersant, Ushakov suggested that the leak might have come from WhatsApp, which the two used. “There are occasional conversations on WhatsApp, which, generally speaking, someone somehow might intercept, apparently,” Ushakov said.
Remember Signalgate, when the Pentagon’s top brass used Signal to discuss airstrikes on Yemen and accidentally added The Atlantic’s editor-in-chief to the group chat? Well, at least Signal is not banned in the U.S. (yet).
And as for WhatsApp, Russian authorities have been trying to replace the country’s most popular messaging apps, Telegram and WhatsApp, with the home-brewed messenger Max – developed by social network company VK (“the Russian Facebook”), and widely believed to be a law enforcement surveillance tool.
The previously stated justification for targeting WhatsApp and Telegram was a surge in scams carried out by fraudsters contacting Russians from abroad. Now, it turns out, WhatsApp is also a threat to diplomatic secrecy?..
So, probably don’t send any state secrets to your friends in Russia over WhatsApp. Just in case.
Rich enough to pay more
On a lighter note: did you know some online stores adjust the price of their goods for you based on what they know about you? Not anymore in New York: the state has just passed a law requiring retailers that algorithmically set prices using customers’ personal data to disclose that in a “clear and conspicuous” way. The main factor in “surveillance pricing” is your location, Wired writes.
***
And that’s all from me this week, folks!
Stay vigilant.
Anna