Trusted Flaggers – Angel or Devil?

Should digital content be regulated, and if so, how?

The question of whether and how content in digital media should be regulated came to my attention again through the blog post "Censorship in the Digital Age: For the Better or Worse". The topic is extremely interesting and relevant to our time, simply because our lives increasingly take place in the digital sphere.

Over the years, the initial idealistic, utopian cyberlibertarian view of the digital sphere as a completely unregulatable space, in which everyone should be sovereign due to the absence of power structures, has come under increasing criticism. One reason is the argument that such a view merely conceals the prevailing power structures in the digital sphere: capitalist tech giants from the global North who can determine pretty much everything (if you are interested, I recommend "Where does German digitality come from?"). Another is the desire of states to be able to act nationally in the digital sphere, given the growing amount of illegal activity on the platforms. But how are we supposed to regulate this sheer mass of content?

On this issue, a German news item caught my attention: the Bundesnetzagentur had authorised a so-called "trusted flagger" for the first time. The press release announcing this sparked great concern about censorship. This article is about these "trusted flaggers" and, specifically, the question of their role as protectors (angels) or censors (devils) of the internet.

(Pixabay)

Trusted flaggers

But first of all: what are trusted flaggers, and what are they for? The Bundesnetzagentur introduces them as follows:

“Trusted flaggers play a central role in implementing the Digital Services Act to effectively combat illegal content on the internet. These organisations have particular expertise and experience in identifying and reporting illegal content. Platforms are legally obliged to prioritise reports from trusted flaggers and take immediate action, such as deleting the content.” (Self-translated)

They originate in the EU's Digital Services Act (DSA), which states that they are awarded their status by the Digital Services Coordinator of the respective member state. Germany, for example, has now authorised its first trusted flagger, as described above: the "Meldestelle REspect!", run by a civil society organisation, which has operated a service since 2017 that checks the legality of hate speech, conspiracy narratives and fake news reported to it and, if necessary, requests deletion from the platform operators.

If a trusted flagger reports a post, the report must be accompanied by a precise and substantiated explanation of why the content should be deleted; this is meant to prevent arbitrary requests. Ultimately, however, it is the platform operator who decides whether or not to delete the content. Under the Digital Services Act, they must do so as soon as it is illegal content, i.e. content that would also be punishable offline, such as child sexual abuse material, but also illegal (e.g. racist) insults.
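To make this prioritisation idea concrete, here is a minimal sketch in Python. It is purely illustrative: all names (`Report`, `handle_reports`, the toy legality check) are my own assumptions, not the DSA's wording or any platform's actual API, and real moderation pipelines are of course far more complex.

```python
# Illustrative sketch only: trusted-flagger reports are handled before
# ordinary user reports, each must carry a justification, and the final
# decision remains with the platform operator.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Report:
    priority: int  # 0 = trusted flagger, 1 = ordinary user report
    content_id: str = field(compare=False)
    justification: str = field(compare=False)  # required explanation


def handle_reports(reports: list[Report]) -> None:
    """Process reports in priority order; the operator decides the outcome."""
    heapq.heapify(reports)
    while reports:
        report = heapq.heappop(reports)
        # Placeholder for the operator's own legality review (in reality a
        # human/legal assessment, not a keyword match):
        is_illegal = "illegal" in report.justification.lower()
        action = "delete" if is_illegal else "keep"
        print(f"{report.content_id}: {action} ({report.justification})")


handle_reports([
    Report(1, "post-42", "possibly spam"),
    Report(0, "post-17", "illegal hate speech, flagged by trusted flagger"),
])
```

In this toy model, "post-17" is reviewed first despite being submitted second, which is the core of what the DSA's prioritisation obligation demands of platforms.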

There are big differences between the various trusted flaggers and their actual practices, but for now let us at least weigh the arguments for and against this form of regulation.

Angel or Devil?

Basically, as with any proposal to regulate digital media, there is a difficult balancing act: on the one hand, protection against illegal activity should be as effective as possible; on the other, user rights and freedom of expression must be safeguarded. This dilemma is reflected quite clearly in the discussion about trusted flaggers.

Trusted flaggers were written into the DSA in the hope of creating an effective protector of the digital sphere. The internet is (fortunately) not a legal vacuum, and given the speed and scale at which content can be created and distributed, the most effective possible regulatory mechanism is needed when illegal content is shared. The trusted flagger approach can deliver this in a relatively harmonised way at the European level.

However, this solution also comes with problems. The criteria for who may be granted such a status are insufficient: for example, it is entirely up to the national authorities to decide who receives it, which in theory can also benefit less independent and less trustworthy organisations. In addition, some argue that there are insufficient safeguards and too little transparency for monitoring the actions of trusted flaggers. It therefore cannot be guaranteed that too much is not blocked or that no bias creeps in.

(Pixabay)

What do you think? Are trusted flaggers a good solution that just needs stronger safeguards, or do we need completely different forms of regulation?

References

Appelman, N., & Leerssen, P. (2022). On "Trusted" Flaggers. Yale Journal of Law & Technology, 24, 452.

Fischer, D. (2022). The digital sovereignty trick: Why the sovereignty discourse fails to address the structural dependencies of digital capitalism in the global south. Zeitschrift für Politikwissenschaft, 32(2), 383-402.

Özer, B. (2023). Balancing Content Moderation and Human Rights in the Digital Age: Analyzing the Trusted Flaggers Mechanism and the Responsibilities of the Online Platforms under the Digital Services Act. Tilburg.