Irresponsible implementation

With the capabilities of (generative) AI on the rise, individuals and companies have adapted their workflows to fully exploit the benefits AI offers. Companies especially, being profit-driven and always looking for ways to cut costs, have jumped on the ever-sought-after ‘technological innovation’. In this rush to beat their competitors, safety concerns are sometimes ignored for the greater good of the company: its profits.

But what if the greater good of the organisation is literally ‘the greater good’? In that case, caution and gradual steps would probably be more fitting. However, this turns out to not always be the case. Last Wednesday, NOS reported that the Dutch government is rapidly implementing AI, often without sufficiently weighing the implications and risks.

From fraud detection systems used by the Dutch Tax Authority (Belastingdienst) to the experimental use of robotic dogs by the Ministry of Justice, AI applications are being integrated into various government functions. However, a recent investigation by the Dutch Court of Audit (Algemene Rekenkamer) has raised serious concerns about the risks associated with this widespread use of AI, as well as the lack of transparency and oversight.

In this blog, I will explore how the Dutch government uses AI, the potential benefits and risks, and the need for greater transparency and regulation.

Efficiency and innovation

The Dutch government currently uses or is testing 433 AI systems, according to the Algemene Rekenkamer. These systems span a wide range of applications, from automating meeting minutes to more sensitive areas, such as identifying individuals at risk of financial trouble or detecting fraud. I’ll give two examples mentioned in the NOS article.

The Dutch police use an AI tool on their fraud-reporting website. This ‘Slimme Keuzehulp’ (roughly, ‘smart choice aid’) makes it easier for citizens to navigate the complicated forms required to submit a report.

The Dutch Tax Authority and the Food and Consumer Product Safety Authority (NVWA) employ AI models to predict where the likelihood of fraud or tax evasion is highest, allowing for targeted investigations.
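
To make the second example concrete, here is a minimal sketch of what such risk-based prioritisation could look like. Everything in it is invented for illustration (the feature names, the synthetic data, even the choice of a logistic regression); the actual models used by the Belastingdienst and the NVWA are not public.

```python
# Hypothetical sketch of risk-based prioritisation: a classifier is trained
# on past inspections and its scores are used to rank new cases. All feature
# names and data are invented; this is not any agency's real model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical cases: [declared_income_gap, late_filings, sector_risk]
X_past = rng.normal(size=(1000, 3))
y_past = (X_past @ np.array([1.2, 0.8, 0.5]) + rng.normal(size=1000)) > 1.0

model = LogisticRegression().fit(X_past, y_past)

# Score incoming cases and inspect the highest-risk ones first.
X_new = rng.normal(size=(200, 3))
risk = model.predict_proba(X_new)[:, 1]
priority_order = np.argsort(risk)[::-1]
print("Top 5 cases to inspect:", priority_order[:5])
```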

These examples show that AI holds great promise for improving the efficiency and effectiveness of public services. AI can help detect patterns that human employees might miss, automate routine tasks, and even make predictive judgments that allow agencies to allocate resources more strategically. In a world where governments are expected to do more with less, AI offers a path forward.

Lack of transparency and accountability

Despite its benefits, the growing use of AI by the Dutch government is not without its problems. One of the most alarming findings from the Court of Audit’s report is that more than half of these AI systems are being used without adequate assessment of their risks. This raises serious concerns about the potential for misuse, discrimination, and privacy violations.

To make matters worse, most of the AI systems in use are not properly documented, making it impossible to trace which decisions and actions might be based on AI hallucinations.

While the European Union does require high-risk systems to be properly documented, the Court of Audit suggests that this creates an incentive for government agencies to classify their AI systems as “low risk” in order to avoid stringent regulatory requirements. This behaviour might leave high-risk, high-impact systems under-regulated and therefore dangerous.

Pressing risks

Discrimination and bias are common in AI systems because a model inherits the biases present in the dataset it is trained on. A notorious scandal in the Netherlands, the ‘toeslagenaffaire’ (childcare benefits scandal), had exactly this problem: the system disproportionately targeted minority families, exacerbating existing social inequalities. AI may be good at spotting correlation, but correlation isn’t always causation.
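
A small, self-contained sketch (entirely synthetic data, with hypothetical ‘group’ and ‘behaviour’ features) shows the mechanism: when historical labels come from a biased process, a model trained on them reproduces that bias even though both groups behave identically.

```python
# Illustrative sketch with synthetic data: a model trained on labels from a
# biased process reproduces that bias. Both groups behave identically, but
# historical labels flagged group 1 at a lower threshold, so the trained
# model "learns" to flag group 1 more often too.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, size=n)     # 0 or 1, e.g. a proxy for origin
behaviour = rng.normal(size=n)         # identical distribution in both groups

# Biased historical labels: group 1 was flagged at a lower threshold.
threshold = np.where(group == 1, 0.5, 1.5)
flagged = behaviour > threshold

X = np.column_stack([group, behaviour])
model = LogisticRegression().fit(X, flagged)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: flagged rate {pred[group == g].mean():.2%}")
# The model flags group 1 far more often, despite identical behaviour.
```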

Another concern is privacy. AI systems need large datasets to work. With the government holding a great deal of sensitive personal information about its citizens, it may be undesirable to have a somewhat untested AI looking through that data. This is especially difficult in areas such as fraud detection, where personal financial data is necessary for analysis but might not be handled with the care it deserves.

As AI systems become more integrated into critical government functions, the potential for cyberattacks increases. AI systems could be hacked, manipulated, or otherwise compromised, leading to dangerous outcomes in areas such as public safety or financial oversight.

Finally, governments are expected to be held accountable for their actions. A well-known problem with AI is the “black box” problem: the system makes decisions that aren’t fully understood even by its developers. This lack of transparency makes accountability impossible. Who is to blame for mistakes? The official using the AI? The developer? The AI itself?

The solution?

The solution might be quite simple. The officials using AI need to be made aware of its dangers. They need to be able to weigh results and judge their validity. To check this, there needs to be a way for citizens to not only see the results, but also learn about the process. What programs were used? In what steps of the decision-making?
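
As a sketch of what that could look like, the snippet below keeps an append-only log recording, for each step of a decision, which program was used and what the official did with its output. The record fields and names are hypothetical, not a description of any existing government system.

```python
# Minimal sketch of per-decision process logging. The record structure is
# hypothetical; it simply captures which program/model was used at which
# step, so the process can be reviewed later.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionStep:
    case_id: str
    step: str            # which stage of the decision process
    system: str          # which program/model was used
    model_version: str
    ai_output: str       # what the system suggested
    human_decision: str  # what the official ultimately decided

def log_step(record: DecisionStep, path: str = "decision_log.jsonl") -> None:
    entry = asdict(record)
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_step(DecisionStep(
    case_id="2024-001",
    step="fraud risk scoring",
    system="risk-model",        # hypothetical name
    model_version="1.3.0",
    ai_output="risk score 0.87",
    human_decision="manual review ordered",
))
```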

Conclusion

AI presents both opportunities and challenges for the Dutch government. While AI can make government services more efficient and responsive, its use must be carefully regulated to avoid the risks of bias, discrimination, and privacy violations. Transparency and accountability are crucial in ensuring that AI serves the public good rather than undermining it. Only by addressing these concerns can we ensure that AI is used ethically and responsibly in the public sector.