Hörsal UB.A.230 - Lindellhallen 3 & Online on Zoom
AI systems are commonly believed to aid in more objective decision-making and, eventually, to make objective decisions of their own. However, such a belief is riddled with fallacies, rooted in an overly simplistic approach to organizational decision-making.
Based on an ethnography of the Dutch police, we demonstrate that making decisions with AI requires practical explanations that go beyond an analysis of the computational methods used to generate predictions, to include an entire ecology of unbounded, open-ended interactions and interdependencies. In other words, explaining AI is ecological. Yet this ecology typically goes unnoticed.
We argue that this is highly problematic, as it is through acknowledging this ecology that we can recognize that we are not, and never will be, making objective decisions with AI. If we continue to ignore the ecology of explaining AI, we end up reinforcing, and potentially even further stigmatizing, existing societal categories.