In the late 18th century, Jeremy Bentham developed the concept of the Panopticon, a prison design that allowed one guard to observe all prisoners. The inmates knew they could be watched but had no way of knowing if they were being monitored right now. Foucault used this design as a metaphor for the constant surveillance of modern society.
Today, algorithms and digital surveillance have gone far beyond Foucault’s vision. We no longer need guards sitting in watchtowers; instead, algorithms monitor our online behavior, track our whereabouts, analyze our social interactions, and even predict our future behavior.
Surveillance and public safety
Can you imagine walking down the street, completely unaware that your every step is being recorded by a camera? You are not only recorded but analyzed by algorithms that track your movements, recognize your face, and store this data indefinitely. This is not science fiction; it’s happening right now in cities around the world.
Sometimes it makes me feel safe. When I read about China’s use of facial recognition to catch a fugitive at a 60,000-person concert[1], I was like, “Okay! I want to be monitored like that when I’m walking alone at night.” Another reason I link this technology to security is that last year, when my cell phone was stolen, I tried to find the thief by looking for surveillance footage. But my friend told me that the Netherlands doesn’t have cameras everywhere like China does, and that facial recognition is illegal here. I couldn’t help but think how great it would be if a society had enough cameras to deter even thieves from stealing.
But beyond these questions of security, there is an even more chilling one: what happens if the algorithm is used by an authority for the wrong purpose?
Information censorship and social control
Algorithms can be an aid to governance, but they can also become a means of maintaining power, silencing dissent, and controlling citizens’ lives.
In the past, bureaucracies were slow and inefficient at handling paperwork, taxes, and public services. With the help of algorithms, their services have become faster, with better results and less human error. In digitally centralized states, however, algorithms are also a powerful instrument of censorship for authoritarian regimes. Machine learning systems can automatically detect and filter politically sensitive content, such as criticism of the government or calls to organize protests, and they do so at a scale that lets regimes control online speech far more effectively.
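To give a sense of how mechanical this filtering can be, here is a minimal sketch of keyword-based content filtering. It is a toy illustration only: the blocked terms, function names, and example posts are all invented, and real censorship systems rely on far more sophisticated machine learning classifiers than a keyword list.

```python
# Toy sketch of automated content filtering (illustrative only).
# Real systems use ML classifiers; this keyword list is invented.

BLOCKED_TERMS = {"protest march", "criticize the government"}  # hypothetical terms

def should_filter(post: str) -> bool:
    """Return True if the post contains any blocked term (case-insensitive)."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

posts = [
    "Join the protest march downtown tomorrow",
    "Great weather today!",
]

for post in posts:
    status = "FILTERED" if should_filter(post) else "published"
    print(f"{status}: {post}")
```

Even a crude rule like this, applied to millions of posts per hour, removes the need for human censors to read anything at all.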
In China, social media platforms use algorithms to censor sensitive content, such as references to the Tiananmen Square massacre or the Hong Kong protests, and facial recognition is used to monitor the Uyghur Muslim population in Xinjiang, identifying people by features such as their faces and eyes. In Iran, the government surveils, alters, and destroys information on people’s phones through an algorithmic system called SIAM: it can break phone encryption, trace the activities of individuals or large groups, generate summary data on who is talking, where and when, and intercept communications if necessary, which has aided the government’s crackdown on protests.[2] In Russia, the SORM (System for Operative Investigative Activities) is likewise used to block communications and collect data at scale in order to identify potential threats to the regime.[3]
And even in democracies, there are risks to algorithm-based governance. In the United States criminal justice system, for example, algorithms are used to produce risk assessments of defendants.[4] These tools score individuals on factors such as criminal history, employment, and community characteristics, and the scores may influence bail decisions or sentencing. While the algorithm itself may appear neutral and impartial, the data it draws on reflects racial discrimination and historical inequality in the U.S. justice system. As a result, such tools tend to overestimate the risk of recidivism for Black defendants and underestimate it for white defendants.
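To make the point concrete, here is a minimal, hypothetical sketch of how a “neutral” scoring formula reproduces bias that is already baked into its inputs. The weights, features, and numbers are all invented for illustration; this is not a reproduction of any real risk assessment tool.

```python
# Toy risk score built from historical data (hypothetical weights and features).
# Not a reproduction of any real tool such as COMPAS.

def risk_score(prior_arrests: int, employed: bool, neighborhood_crime_rate: float) -> float:
    """Weighted sum of factors; a higher score is treated as 'riskier'."""
    score = 2.0 * prior_arrests + 3.0 * neighborhood_crime_rate
    if not employed:
        score += 1.5
    return score

# Two people with identical behavior: the only difference is that one lives in
# an over-policed neighborhood, so the same conduct produced more recorded
# arrests and a higher recorded local crime rate.
person_a = risk_score(prior_arrests=1, employed=True, neighborhood_crime_rate=0.2)
person_b = risk_score(prior_arrests=3, employed=True, neighborhood_crime_rate=0.6)

print(f"Person A score: {person_a:.1f}")  # 2.6
print(f"Person B score: {person_b:.1f}")  # 7.8 -- biased inputs produce a biased output
```

The formula never mentions race or neighborhood identity, yet it still punishes whoever the historical record was skewed against.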
And now I’m wondering: how do we make sure that the algorithms we create serve the population rather than control it?
References
1. Wang, Amy. “A Suspect Tried to Blend in With 60,000 Concertgoers. China’s Facial-Recognition Cameras Caught Him.” The Washington Post, April 13, 2018. https://www.washingtonpost.com/news/worldviews/wp/2018/04/13/china-crime-facial-recognition-cameras-catch-suspect-at-concert-with-60000-people/.
2. Biddle, Sam, and Murtaza Hussain. “Hacked Documents: How Iran Can Track and Control Protesters’ Phones.” The Intercept, October 28, 2022. https://theintercept.com/2022/10/28/iran-protests-phone-surveillance/.
3. Soldatov, Andrei, and Irina Borogan. “Russia’s Surveillance State.” CEPA, April 15, 2023. https://cepa.org/article/russias-surveillance-state/.
4. Research Outreach. “Justice Served? Discrimination in Algorithmic Risk Assessment.” Research Outreach, November 8, 2023. https://researchoutreach.org/articles/justice-served-discrimination-in-algorithmic-risk-assessment/.
Algorithms are indeed powerful and helpful tools, but a lack of transparency and knowledge, and the misuse of them, can result in really negative outcomes. Some governments have already (as you stated) misused algorithm-based governance in the physical world, but this can also happen in digital spaces with their own governance. According to Dogruel, Masur & Joeckel, algorithms in digital spaces lead to algorithmic decision-making and curation, which can exploit and manipulate users’ decision-making and has a negative effect on users’ “autonomy in navigating online environment.” These digital spaces often ask netizens to e.g. accept their terms & conditions before they can use the space, and thereby gain (some kind of) governance over their users, often exercised through algorithms. The monitoring, tracking, analyzing and predicting of us by algorithms can have positive outcomes (like you said about China’s surveillance technology catching a fugitive), but it can definitely also have negative outcomes (as stated by Dogruel, Masur & Joeckel). We should definitely raise our awareness and knowledge of algorithms in the digital world and their consequences in the physical one.
I took a course where we talked about ‘Digital Geographies’, so that’s why this piqued my interest and made me want to comment haha.
Dogruel, Leyla, Philipp Masur, and Sven Joeckel. “Development and Validation of an Algorithm Literacy Scale for Internet Users.” Communication Methods and Measures 16, no. 2 (2022): 115–33. doi:10.1080/19312458.2021.1968361.
Nice blog! I think it shows that algorithm-based governance will always depend on the dataset it’s trained on, and if the data is flawed or incomplete, or the algorithm is built on bad policy assumptions, it could just make the problem it’s trying to solve worse. It also raises the question of who should be accountable when such an algorithm does show bias.