News Recommenders and Cooperative Explainability: Confronting the contextual complexity in AI explanations
| Authors | |
|---|---|
| Publication date | 2020 |
| Series | Paper Kenniscentrum Data & Maatschappij |
| Number of pages | 11 |
| Publisher | KU Leuven, Centre for IT & IP Law |
| Organisations | |
| Abstract | Artificial Intelligence (AI) needs to be explainable. This is a key objective advanced by the European Commission (and its high-level expert group) throughout its AI policy, by the Council of Europe, and by a rapidly growing body of academic scholarship across disciplines, from Computer Science to Communication Science and Law. This interest in explainability is in part fuelled by pragmatic concerns that some form of understanding is necessary for AI’s uptake (and therefore its economic success). But on a more fundamental level, there is a recognition that explainability is necessary to understand and manage the societal shifts AI triggers, and to ensure the continued agency of the individuals, market actors, regulators, and societies confronted with AI. |
| Document type | Report |
| Language | English |
| Published at | https://www.ivir.nl/nl/publicaties/news-recommenders-and-cooperative-explainability-confronting-the-contextual-complexity-in-ai-explanations/ |
| Other links | https://data-en-maatschappij.ai/en/publications/paper-news-recommenders-and-cooperative-explainability |
| Downloads | Visiepaper-explainable-AI-final (Final published version) |