When Algorithms Are in the Hands of the State

In the late 18th century, Jeremy Bentham developed the concept of the Panopticon, a prison design that allowed a single guard to observe all prisoners. The inmates knew they could be watched but had no way of knowing whether they were being monitored at any given moment. Michel Foucault later used this design as a metaphor for the constant surveillance of modern society.

Today, algorithms and digital surveillance have gone far beyond Foucault’s vision. We no longer need guards in watchtowers; instead, algorithms monitor our online behavior, track our whereabouts, analyze our social interactions, and even predict what we will do next.

Surveillance and public safety

Can you imagine walking down the street, completely unaware that your every step is being recorded by a camera? You are not only recorded but analyzed by algorithms that track your movements, recognize your face, and store this data indefinitely. This is not science fiction; it’s happening right now in cities around the world.

Sometimes it makes me feel safe. When I read about China using facial recognition surveillance to catch a fugitive at a 60,000-person concert [1], I thought, “Okay! I want to be monitored like that when I’m walking alone at night.” Another reason I link this technology to security is that last year, when my cell phone was stolen, I tried to find the thief by checking surveillance footage. But my friend told me that the Netherlands doesn’t have cameras everywhere like China does, and facial recognition is illegal here. I couldn’t help but think how great it would be if a society had enough cameras to deter even thieves from stealing.

But beyond these security benefits, there is an even more chilling question: what happens if an authority uses the algorithm for the wrong purpose?

Information censorship and social control

Algorithms can be an aid to effective governance, but they can also be a means of maintaining power, silencing dissent, and controlling citizens’ lives.

In the past, bureaucracies were slow and inefficient at handling paperwork, taxes, and public services. With the help of algorithms, these services have become faster, with better results and less human error. However, in digitally centralized states, algorithms are also a powerful instrument of censorship for authoritarian regimes. Machine-learning algorithms can automatically detect and filter politically sensitive content, such as criticism of the government or calls to organize protests, and they do so at a scale that lets regimes control online speech far more effectively than manual review ever could.
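
To make that mechanism concrete, here is a minimal, hypothetical sketch of how flag-and-suppress filtering works in principle. The watchlist, the threshold, and the review_post function are all invented for illustration and do not describe any real platform; a production system would replace the keyword count with a machine-learning classifier, but the surrounding logic is similar.

```python
# Hypothetical sketch of automated content filtering (not any real system).
# A post is scored against a watchlist and suppressed above a threshold;
# production systems swap the keyword count for an ML classifier score.

BLOCKED_TERMS = {"protest", "rally", "strike"}  # invented watchlist

def review_post(text: str, threshold: int = 1) -> str:
    """Return 'suppressed' if the post mentions enough flagged terms."""
    hits = sum(term in text.lower() for term in BLOCKED_TERMS)
    return "suppressed" if hits >= threshold else "published"

print(review_post("Join the rally downtown at noon"))   # suppressed
print(review_post("Great weather for a picnic today"))  # published
```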

In China, social media platforms use algorithms to censor sensitive content, such as references to the Tiananmen Square massacre or the Hong Kong protests, and facial recognition is used to monitor the Uyghur Muslim population in Xinjiang through features such as facial structure and eye characteristics. In Iran, the government surveils, alters, and destroys information on people’s phones through an algorithmic system called SIAM: it can break phone encryption, trace the activities of individuals or large groups, generate summary data on who is talking to whom, where, and when, and intercept communications if necessary, all of which has aided the government’s crackdowns on protests [2]. In Russia, SORM (the System for Operative Investigative Activities) is likewise used to help the government block communications and collect large-scale data to identify potential threats to the regime [3].

Even in democracies, algorithm-based governance carries risks. In the United States criminal justice system, for example, algorithms are used to produce risk assessments of defendants [4]. These tools score individuals on factors such as criminal history, employment, and community characteristics, and the scores may influence bail decisions or sentencing. While the algorithm itself may look neutral and impartial, the data it draws on reflects racial discrimination and historical inequality in the U.S. justice system. As a result, it tends to overestimate the risk of recidivism for Black defendants and underestimate it for white defendants.
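
To see why a formula that looks neutral can still discriminate, here is a hypothetical, simplified scoring sketch. The fields, weights, and the risk_score function are invented for illustration and are not taken from COMPAS or any real tool; the point is only that when recorded prior arrests reflect over-policing, people with identical conduct end up with different scores.

```python
# Hypothetical, simplified risk score (not COMPAS or any real tool).
# The formula treats everyone "the same", but it inherits whatever bias
# is already baked into the historical arrest records it consumes.

from dataclasses import dataclass

@dataclass
class Defendant:
    prior_arrests: int   # taken from historical policing data
    employed: bool
    age: int

def risk_score(d: Defendant) -> float:
    """Toy weighted score: higher means higher predicted recidivism risk."""
    score = 2.0 * d.prior_arrests
    score += 0.0 if d.employed else 1.5
    score += 1.0 if d.age < 25 else 0.0
    return score

# Two people with identical behavior; one lives in a heavily policed
# neighborhood and therefore has more *recorded* prior arrests.
a = Defendant(prior_arrests=3, employed=True, age=30)  # over-policed area
b = Defendant(prior_arrests=1, employed=True, age=30)  # lightly policed area
print(risk_score(a), risk_score(b))  # 6.0 2.0 -> same conduct, unequal scores
```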

And now I’m left wondering: how do we make sure that the algorithms we create serve the population rather than control it?

References

  1. Wang, Amy. “A Suspect Tried to Blend in With 60,000 Concertgoers. China’s Facial-Recognition Cameras Caught Him.” The Washington Post, April 13, 2018. https://www.washingtonpost.com/news/worldviews/wp/2018/04/13/china-crime-facial-recognition-cameras-catch-suspect-at-concert-with-60000-people/.
  2. Biddle, Sam, and Murtaza Hussain. “Hacked Documents: How Iran Can Track and Control Protesters’ Phones.” The Intercept, October 28, 2022. https://theintercept.com/2022/10/28/iran-protests-phone-surveillance/.
  3. Soldatov, Andrei, and Irina Borogan. “Russia’s Surveillance State.” CEPA, April 15, 2023. https://cepa.org/article/russias-surveillance-state/.
  4. Research Outreach. “Justice Served? Discrimination in Algorithmic Risk Assessment.” Research Outreach, November 8, 2023. https://researchoutreach.org/articles/justice-served-discrimination-in-algorithmic-risk-assessment/.