Defining the scope of AI ADM system risk assessment
| Authors | |
|---|---|
| Publication date | 2022 |
| Host editors | |
| Book title | Research handbook on EU data protection law |
| ISBN | |
| ISBN (electronic) | |
| Series | Research Handbooks in European Law |
| Chapter | 16 |
| Pages (from-to) | 405-434 |
| Number of pages | 30 |
| Publisher | Cheltenham: Edward Elgar Publishing |
| Organisations | |
| Abstract | Guidance documents for technology governance and data protection often use broad terms such as Artificial Intelligence (AI). This is problematic; the term ‘AI’ is inherently ambiguous, and it is difficult to tease out the nuances in the ‘grey areas’ between AI techniques and/or automated decision-making (ADM) processes. We use four illustrative examples to demonstrate that the categorisation gives only partial information about each system’s risk profile. We argue that organisations should adopt risk-oriented approaches to identify system risks that extend beyond technology classification as AI or non-AI. Organisational governance processes should entail a more holistic assessment of system risk: rather than relying on ‘top-down’ categorisations of the technologies employed, they should apply a ‘bottom-up’ risk identification process that enables a more effective identification of appropriate controls and mitigation strategies. |
| Document type | Chapter |
| Language | English |
| Published at | https://doi.org/10.4337/9781800371682.00025 |
| Downloads | Chapter 16_ Defining the scope of AI ADM system risk assessment (Final published version) |
| Permalink to this page | |