Algorithmic fog of war: When lack of transparency violates the law of armed conflict

Open Access
Publication date 2021
Journal Journal of Future Robot Life
Volume 2 | Issue 1-2
Pages (from-to) 43–66
Number of pages 24
Organisations
  • Faculty of Law (FdR)
  • Faculty of Law (FdR) - Leibniz Center for Law (FdR)
Abstract
Under international law, weapon capabilities and their use are regulated by legal requirements set by International Humanitarian Law (IHL). Currently, there are strong military incentives to equip capabilities with increasingly advanced artificial intelligence (AI), including opaque (less transparent) models. As opaque models sacrifice transparency for performance, it is necessary to examine whether their use remains in conformity with IHL obligations. First, we demonstrate that the incentives for automation drive AI toward complex task areas and dynamic, unstructured environments, which in turn necessitates resort to more opaque solutions. We subsequently discuss the ramifications of opaque models for foreseeability and explainability. Then, we analyse their impact on IHL requirements from a development, pre-deployment and post-deployment perspective. We find that while IHL does not regulate opaque AI directly, the lack of foreseeability and explainability frustrates the fulfilment of key IHL requirements to the extent that the use of fully opaque AI could violate international law. States are urged to implement interpretability during development and to seriously consider the challenging complication of determining the appropriate balance between transparency and performance in their capabilities.
Document type Article
Language English
Published at https://doi.org/10.3233/FRL-200019