I’ve been seeing people worried about how the European Union is trying to handle the situation. The influence social networks have in our lives has become more and more evident, and the discussion about online safety has grown alongside it. At every step of our online journey, companies are piling up data, sometimes as if it were more valuable than money itself. Is the European Union trying to protect every citizen with a bit too firm a hand, or does its capitalist background show in the way it wants to collect data?
One of the biggest recent laws about online platforms is the Digital Services Act (DSA), a set of EU rules that applies to online intermediary services such as hosting providers, marketplaces and social networks. Platforms with more than 45 million monthly active users in the EU, like TikTok, Instagram, YouTube and X, are designated Very Large Online Platforms (VLOPs) and face stricter rules because of their impact.
Platforms must set up easy systems for users to report illegal content and act on those reports quickly, and users can appeal if they think content was unfairly removed. Platforms must also be more transparent about how their recommendation algorithms work and publish regular transparency reports. Targeted advertising based on profiling minors is banned, and defaults must be stronger for users under 18. VLOPs additionally have to assess and mitigate systemic risks such as the spread of disinformation and harm to mental health.
What is interesting is that each EU country has a Digital Services Coordinator, while VLOPs are supervised directly by the European Commission. Platforms that fail to comply face fines of up to 6% of their global annual turnover.
The Digital Services Act really focuses on security, the protection of minors and fair competition. The risk is over-removal: to stay on the safe side, platforms may end up deleting content that is actually legal. It is a good plan overall, but compliance is costly, especially for smaller companies.
The next thing I want to reflect on is the “chat control.”
The proposal was first introduced in 2022 but has recently regained attention. While its stated purpose is preventing child abuse on the internet, it does not seem like the fairest way to address the problem. It has been stalled for years because many EU countries, experts, and civil rights groups oppose it. The debate has reignited because Denmark revived the proposal, and a new vote is expected in October 2025.
The proposed “Chat Control” regulation in the EU is a serious threat to online privacy. By requiring companies to scan private messages, even those protected with end-to-end encryption, it could undermine one of the strongest tools we have to keep conversations secure. This kind of mass surveillance risks treating all citizens as potential criminals.
The technology itself is also problematic. Automated scanning can generate false positives, meaning innocent family photos or jokes could be flagged as suspicious. Once such a system is in place, it could easily be expanded to monitor political speech or other lawful activities, creating a dangerous precedent.
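To make the false-positive problem concrete, here is a small, entirely hypothetical Python sketch (not any real scanner, and the "illegal content" string is just a placeholder). Scanning systems of this kind typically compare a lossy fingerprint of each message against a blocklist; because a lossy fingerprint maps many different inputs to the same value, innocent content can collide with a blocklisted entry and get flagged.

```python
import hashlib

# Hypothetical toy scanner -- not any real system. Proposed scanners rely on
# perceptual hashes that tolerate small changes in content; that lossiness is
# simulated here by truncating a cryptographic hash to 4 bits, so many
# unrelated messages end up sharing the same fingerprint.

def weak_fingerprint(message: str, bits: int = 4) -> int:
    """Lossy stand-in for a perceptual fingerprint (only 2**bits values)."""
    return hashlib.sha256(message.encode()).digest()[0] % (2 ** bits)

# Blocklist holding the fingerprint of "known bad" content (placeholder text).
blocklist = {weak_fingerprint("known illegal content")}

def scan(message: str) -> bool:
    """Flag a message if its fingerprint matches a blocklisted one."""
    return weak_fingerprint(message) in blocklist

# Scan 500 perfectly innocent messages: with only 16 possible fingerprint
# values, some inevitably collide with the blocklist and get flagged.
innocent = [f"family photo number {i}" for i in range(500)]
false_positives = [m for m in innocent if scan(m)]
print(f"{len(false_positives)} of {len(innocent)} innocent messages flagged")
```

The tiny fingerprint exaggerates the effect for demonstration, but the mechanism is the same at realistic scale: any lossy matching scheme trades some rate of wrongly flagged content for detection, and across the volume of every private message in the EU even a very small error rate means a large number of innocent people flagged.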
Ultimately, Chat Control could become a form of information theft by the EU, since it allows sensitive personal data to be collected and possibly misused. Protecting children is vital, but it should not come at the cost of destroying privacy, weakening encryption, and opening the door to abuse of power.
This could erode trust in the EU’s principles, edging toward something resembling a totalitarian regime, and I personally think it would take away from the core of the internet experience.