Machine learning is a tool we use on a daily basis. Although ChatGPT may be the first example that comes to mind, machine learning-based applications are more common than you might think. If you want to get from point A to point B and open Google Maps, the fastest route recommended to you is the product of a machine learning algorithm. We engage with these algorithms every day and rely on them in our decision-making: which route to take, what to do according to the weather, what to buy. And it is not only us as individuals; large corporations and governments make use of machine learning algorithms in their decision-making as well.
One thing we do not know, however, is what these algorithms are really learning. This lack of understanding and comprehension of how a machine learning system works is what is referred to as a black box.
What is a Black Box?
A black box is a system in which only the input and output are interpretable, while the internal workings are either beyond human reach or beyond human understanding. Such a system can also be called epistemically opaque. Epistemic opacity means a lack of transparency or understanding of the underlying processes, in this case of a computational system (Humphreys, 2008). There is no way to examine the system at work and follow the logic by which the given input was turned into the output presented.
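To make this concrete, here is a minimal sketch in Python, assuming scikit-learn and an invented dataset (both are my choices for illustration, not anything from the sources). Every weight of the trained network can be printed, yet the numbers do not explain why a given input produces a given output:

```python
# A minimal sketch of epistemic opacity (assumes scikit-learn is installed;
# the data and the "true" relationship below are invented for illustration).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))      # three arbitrary input features
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]    # a hidden "true" relationship

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.5, -0.2, 0.7]]))   # the output is observable...
print(sum(w.size for w in model.coefs_))   # ...and so is every learned weight,
                                           # but a pile of weights is not an explanation
```

The input and the output are perfectly inspectable; the mapping between them is encoded in over a thousand numeric parameters that carry no human-readable logic.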
One example that demonstrates the idea of epistemic opacity and the black box is something we all use daily.
An Example: Weather Prediction Models
Weather prediction models are computational models that simulate the complex dynamics of the atmosphere from inputs such as temperature, pressure, and wind speed. They might appear straightforward, but they are highly complex, combining enormous numbers of data points and sub-models. Due to the sheer volume of data and the complexity of the computations, it is practically impossible for a human to manually follow and understand the steps of such a simulation. According to a 2016 report, a forecast model would need to solve more than 100 million complex equations to produce a single hour of forecast.
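A rough back-of-the-envelope calculation gives a feel for this scale. The resolution, level count, variable count, and step size below are illustrative assumptions, not the configuration of any real forecast model:

```python
# Back-of-the-envelope estimate of the work behind one forecast hour.
# All numbers are illustrative assumptions, not any specific model's specs.
EARTH_SURFACE_KM2 = 510e6     # approximate surface area of the Earth
grid_spacing_km = 25          # assumed horizontal resolution
vertical_levels = 60          # assumed number of atmospheric layers
variables = 7                 # e.g. temperature, pressure, humidity, wind components
steps_per_hour = 12           # assumed: one integration step per 5 simulated minutes

columns = EARTH_SURFACE_KM2 / grid_spacing_km**2
per_step = columns * vertical_levels * variables
print(f"{per_step * steps_per_hour:.1e} equation evaluations per forecast hour")
# => on the order of billions, far beyond what a human could follow step by step
```

Even with these deliberately modest assumptions, the count lands in the billions per simulated hour, which is in line with the report's figure of over 100 million complex equations.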
What we do not know is what the computer has learned and what the reasons behind a given prediction are. Is it just glorified pattern recognition based on prior weather events? Did the algorithm manage to learn the laws of fluid mechanics? Can it fully grasp a weather phenomenon like rain, and would that mean it also understands the sensation of getting wet in the rain? These questions about the justification behind weather predictions might seem redundant; one may argue that it does not matter because it is just a weather prediction.
Another Example: The Dutch Childcare Benefits Scandal
An example that demonstrates why understanding the logic of machine learning algorithms matters is the recent Dutch childcare benefits scandal. The Dutch tax authorities implemented a machine learning algorithm to track and identify cases of fraud and to create risk profiles. However, the system used indicators such as 'foreign-sounding names', 'dual nationality', and 'low income' as signals of potential fraud. Human authorities were too late to respond to this faulty fraud detection, and thousands of families went into debt, with many ending up in poverty, because they were asked to pay back the childcare benefit money. Some lost their homes or their jobs, and more than a thousand children were taken out of their homes and placed in state custody as a result of the accusations.
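To see how such a failure can arise mechanically, consider a deliberately simplified, hypothetical sketch (in no way the actual Dutch system; the data and features are invented): a risk model trained on historically biased fraud flags will happily encode a protected attribute as a "risk" signal:

```python
# Hypothetical sketch: a model trained on biased labels learns the bias.
# This is NOT the actual Dutch system; data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
dual_nationality = rng.integers(0, 2, n)   # protected attribute (0 or 1)
income = rng.normal(0, 1, n)               # a legitimate-looking feature
# Biased historical labels: people in the group were flagged far more often,
# independent of their actual behaviour.
flagged = (rng.random(n) < 0.05 + 0.20 * dual_nationality).astype(int)

X = np.column_stack([dual_nationality, income])
model = LogisticRegression().fit(X, flagged)
print(dict(zip(["dual_nationality", "income"], model.coef_[0].round(2))))
# The large positive coefficient on dual_nationality shows that the model has
# simply absorbed the bias in its training data as a predictor of "fraud".
```

In a simple linear model like this one, the offending coefficient is at least visible on inspection; in an opaque black-box model, the same bias can operate without leaving any such readable trace.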
Understanding the reasoning behind machine learning applications might seem trivial, as in the example of weather prediction, but it is a matter of life and death in the case of the Dutch childcare benefits scandal. Even though the ideas of the black box and epistemic opacity suggest that full understanding may be theoretically impossible, that does not mean we should disregard the question of how we can gain a better understanding of the reasoning behind such applications.
References
University of Amsterdam. (2023, February 8). The Dutch childcare benefit scandal shows that we need explainable AI rules. https://www.uva.nl/en/shared-content/faculteiten/en/faculteit-der-rechtsgeleerdheid/news/2023/02/childcare-benefit-scandal-transparency.html
Amnesty International. (2021, October 25). Dutch childcare benefit scandal an urgent wake-up call to ban racist algorithms. https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/
Drigo, I. (n.d.). What are weather models and how do they work. WINDY.APP. Retrieved November 13, 2023, from https://windy.app/blog/what-are-weather-models.html
Herderscheê, G. (2021, October 19). Ruim 1.100 kinderen van gedupeerden toeslagenaffaire werden uit huis geplaatst [More than 1,100 children of victims of the benefits scandal were removed from their homes]. de Volkskrant. https://www.volkskrant.nl/nieuws-achtergrond/ruim-1-100-kinderen-van-gedupeerden-toeslagenaffaire-werden-uit-huis-geplaatst~baefb6ff/
Humphreys, P. (2008). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615-626.
Nemec, M., et al. (n.d.). The Dutch childcare benefit scandal, institutional racism and algorithms [Parliamentary question O-000028/2022]. European Parliament. Retrieved November 22, 2023, from https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html
Sylvie, M. (2016, November 3). Weather forecasting models. Encyclopedia of the Environment. https://www.encyclopedie-environnement.org/en/air-en/weather-forecasting-models/
Thank you so much for your post!
Machine learning and the black box concept are black magic to me, and it was really nice to read something about them! I agree with you that we shouldn't disregard the question of how we can gain a better understanding of the reasoning behind applications based on machine learning. I also think that we should not fully rely on them in situations where they can have a major impact on people's lives, as with the childcare benefits scandal.