Performance or Explainability? A Law of Armed Conflict Perspective
| Publication date | 2023 |
|---|---|
| Book title | Artificial Intelligence and Normative Challenges |
| Book subtitle | International and Comparative Legal Perspectives |
| Series | Law, Governance and Technology series |
| Event | International Conference on Artificial Intelligence and Normative Challenges |
| Pages (from-to) | 255-279 |
| Publisher | Cham: Springer |
| Abstract |

Machine learning techniques lie at the centre of many recent advancements in artificial intelligence (AI), including in weapon systems. While powerful, these techniques utilise opaque models whose internal workings are generally quite difficult to explain, which has necessitated the development of explainable AI (XAI). In the military domain, both performance and explainability are important and legally required by international humanitarian law (IHL). In practice, however, these two desiderata are in conflict, as improving explainability may involve paying an opportunity cost in performance, and vice versa. It is unclear how IHL requires States to address this dilemma. In this article, we attempt to operationalise normative IHL requirements in terms of P (performance) and X (explainability) to derive qualitative guidelines for decision-makers on this issue. We first explain the explainability-performance trade-off, what causes it, and what its consequences are. We then explore relevant IHL principles that include P and X as requirements, and develop four tenets derived from these principles. We demonstrate how IHL prescribes minimum values for both P and X, but that once these values are achieved, P should be prioritised over X. We conclude by formulating a general guideline and provide an example of how this would impact model choice.
| Document type | Chapter |
| Language | English |
| Published at | https://doi.org/10.1007/978-3-031-41081-9_14 |
