Algorithmic surveillance and the political life of error
| Authors | |
|---|---|
| Publication date | 2021 |
| Journal | Journal for the History of Knowledge |
| Article number | 10 |
| Volume | 2 |
| Issue number | 1 |
| Number of pages | 13 |
| Organisations | |
| Abstract | Concerns with errors, mistakes, and inaccuracies have shaped political debates about what technologies do, where and how certain technologies can be used, and for which purposes. However, error has received scant attention in the emerging field of ignorance studies. In this article, we analyze how errors have been mobilized in scientific and public controversies over surveillance technologies. In juxtaposing nineteenth-century debates about the errors of biometric technologies for policing and surveillance to current criticisms of facial recognition systems, we trace a transformation of error and its political life. We argue that the modern preoccupation with error and the intellectual habits inculcated to eliminate or tame it have been transformed with machine learning. Machine learning algorithms do not eliminate or tame error, but they optimize it. Therefore, despite reports by digital rights activists, civil liberties organizations, and academics highlighting algorithmic bias and error, facial recognition systems have continued to be rolled out. Drawing on a landmark legal case around facial recognition in the UK, we show how optimizing error also remakes the conditions for a critique of surveillance. |
| Document type | Article |
| Note | In special issue: Histories of Ignorance. |
| Language | English |
| Published at | https://doi.org/10.5334/jhk.42 |
| Downloads | 42-551-2-PB (Final published version) |
