
Artificial intelligence is shaping our world faster than any technology before it. But while the U.S. lets companies innovate freely and China embraces state control, the European Union has chosen another path: regulation. With the AI Act, adopted in March 2024, the EU has become the first major economic actor in the world to draw a legal line around how AI should and shouldn’t be used.
It is the first comprehensive AI law in the world, one that sets global standards for artificial intelligence and positions the European Union as a true pioneer in the field.
The law classifies AI systems using a risk-based approach. At the highest level, unacceptable-risk systems include social scoring and real-time facial recognition in public spaces. These are banned by the EU because they represent major breaches of privacy and are considered morally unacceptable, a position I completely agree with.
High-risk AI covers sensitive areas such as health, education, recruitment, justice, or critical infrastructure. These systems are subject to strict oversight: regular audits, transparency obligations, and strong data security requirements.
Limited-risk AI includes chatbots and generative AI (like ChatGPT or Midjourney). Here, users must be clearly informed when they are interacting with a machine or when content has been artificially generated. These technologies are also monitored closely, given the influence they are likely to have on society.
Finally, minimal-risk AI systems such as video games or AI-powered filters are considered harmless and remain free to use, although some potential dangers could still emerge from their misuse.
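To make the four tiers easier to picture, here is a minimal, purely illustrative sketch in Python. The tier names follow the Act, but the example systems, the EXAMPLE_SYSTEMS mapping, and the classify helper are my own assumptions for illustration; they are not definitions from the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict oversight: audits, transparency, data security"
    LIMITED = "transparency obligations (disclose AI involvement)"
    MINIMAL = "free to use"

# Illustrative mapping of the example use cases discussed above to tiers.
# The mapping itself is my own simplification, not the Act's legal text.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time facial recognition": RiskTier.UNACCEPTABLE,
    "recruitment screening tool": RiskTier.HIGH,
    "medical diagnosis assistant": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "image generator": RiskTier.LIMITED,
    "video-game AI": RiskTier.MINIMAL,
}

def classify(system: str) -> RiskTier:
    """Look up a use case; unknown systems default to MINIMAL here.
    That default is a simplification; real classification under the
    Act is a legal analysis, not a dictionary lookup."""
    return EXAMPLE_SYSTEMS.get(system, RiskTier.MINIMAL)

if __name__ == "__main__":
    for name, tier in EXAMPLE_SYSTEMS.items():
        print(f"{name:30s} -> {tier.name}: {tier.value}")
```

The point of the sketch is simply that obligations scale with risk: the higher the tier a system falls into, the heavier the requirements attached to it.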
What I find interesting is that while the AI Act is clearly about safety, ethics, and human rights, it also actively encourages innovation. One way it does this is through regulatory sandboxes: supervised environments where startups and companies can test AI systems in real or simulated conditions, with some legal flexibility. This ensures new ideas can grow without being crushed by complex rules, while regulators learn alongside innovators.
However, one major gap stands out. The AI Act does not seriously address the environmental impact of AI. Training large AI models consumes massive amounts of energy and produces significant carbon emissions, yet sustainability barely features in the regulation. For a law that aims to define what “ethical AI” means, isn’t environmental responsibility part of the picture too? Some see this as a missed opportunity for Europe to lead not only in safe and fair AI, but also in green AI.
What I found in my research, reading an excellent article, “The Dawn of Regulated AI: Analyzing the European AI Act and its Global Impact” by Kalojan Hoffmeister, was that:
“It is common ground that AI will have a significant economic impact on global productivity. A recent 2023 research indicates that AI could add the equivalent of USD 2.6 trillion to USD 4.4 trillion annually to the global economic output around the world. A previous study found that the improved productivity could contribute up to USD 15.7 trillion on the global economy in 2030.”
That seems reason enough for the European Union to keep AI growing in those sandboxes.
The AI Act shows Europe’s determination to shape artificial intelligence according to its values: safety, ethics, and human rights. It even creates spaces for innovation to flourish, like regulatory sandboxes. But there is one tension the law does not solve — and maybe cannot solve: the environmental footprint of AI. Training and running these massive models requires enormous amounts of energy, and no amount of regulation can make that footprint vanish.
