Dystopian future? The neoliberalisation of privacy in a digital age

The 2018 Facebook–Cambridge Analytica scandal has made many people more conscious of privacy concerns and of how much of our personal information is being collected and sold by social media corporations. But are we as individuals actually capable of protecting our online privacy, or is this an issue that can only be addressed through collective regulation?

The story of how Cambridge Analytica, a data analytics firm employed by both the Trump election team and pro-Brexit campaigners, harvested private data from over 50 million Facebook users broke after a whistleblower, Christopher Wylie, came forward to report the massive data breach. The scandal put online safety and privacy back on the map, raising awareness of privacy concerns among everyday citizens around the world and politicians alike, and showing that privacy laws often lag behind the digital age. At the same time, the claim persists that privacy is a personal issue we are supposed to guard ourselves, and that we are able to make rational, well-informed decisions about it.

It is then not surprising that after this story broke, many people wrote think pieces and columns about leaving Facebook, arguing that the only way to protect our data is to opt out of social media, or at least of Facebook, which led to the increased popularity of the #deletefacebook hashtag. It is questionable, however, whether that actually protects you from having your data taken by Facebook: the company has admitted to collecting data on non-users as well, through what others post and through users granting permission to access their email and phone contacts, thus gathering information about people who do not have accounts at all.

This leads to something of a paradox, dubbed the privacy paradox by researchers: on the one hand people want to protect their data, but on the other hand they do not want to miss out on the benefits of social media and online engagement. Gordon Hull describes this paradox in his paper “Successful failure: what Foucault can teach us about privacy self-management in a world of Facebook and big data” (2015). Hull argues that people do actually take measures to protect their privacy, but that these measures are unsuccessful because they rely on self-management of privacy settings, which leads to an “underprotection” of privacy, as users face several issues when trying to secure their data while visiting a website or being part of an online community. Firstly, users often do not know what they are consenting to: privacy statements are frequently kept inaccessible through technical language and sheer length, since websites benefit from being able to collect as much data as possible, and websites additionally retain the right to change their privacy policy at any time (Hull 2015, 91). Secondly, websites collect data that we might not consider valuable, but that can be of value to corporations who analyse trends and use algorithms to find patterns we might be completely oblivious to (there is also the issue of bias in algorithms, which I might dedicate another blog post to in the future). A third point Hull makes is that opting out of these sites is not an option for many people (Hull 2015, 93), especially in an increasingly digital world; according to Forbes, not having a Facebook account can even make you suspicious to certain employers. This raises the question of who can afford to opt out of sites such as Facebook, and for whom they might be essential for finding connections and future employment.

How many sites would you no longer be able to visit if you stopped accepting any cookies? And how much time would it take to read every privacy agreement before accessing even a single website; how many sites would you actually still visit? Hull noted that it would require 244 hours to read the privacy policies of all the sites we visit, and that was in 2015. Recent European Union regulation has made it significantly easier to opt out of specific cookies, but many sites are hosted outside the European Union's borders and thus do not have to comply with these regulations, making opting out of cookies considerably more difficult.

Additionally, seeing privacy only as a matter of the individual ignores the social impact of our data and how it can be used to manipulate us. The case of Cambridge Analytica has shown that, on the basis of our data, people could be sorted into bubbles of information that become increasingly hard to escape, and then targeted with specific advertisements or promotional materials designed to sway their opinion towards a particular political candidate or (political) cause.

Online privacy is still predominantly seen as a personal issue. In this article, however, I hope to have made the point that we, as users, often lack the insight needed to make informed decisions about what data we give away and what purpose this data serves. We therefore need a new way of looking at online privacy instead of the opt-out model we currently subscribe to: we should push for stricter privacy regulations that protect users from having their data taken, regulations that do not revolve around every individual becoming a data expert, but instead restrict which data corporations may collect from users in the first place.

Hull, Gordon. “Successful Failure: What Foucault Can Teach Us about Privacy Self-management in a World of Facebook and Big Data.” Ethics and Information Technology 17, no. 2 (2015): 89-101.

Some additional sources I found quite interesting:
The Key to Safety Online Is User Empowerment, Not Censorship
Trading privacy for survival is another tax on the poor