A peek inside two black boxes: an experiment with explainable artificial intelligence and IPCC leadership
| Authors | |
|---|---|
| Publication date | 04-2024 |
| Journal | International Journal of Digital Humanities |
| Volume | 6 |
| Issue number | 1 |
| Pages (from-to) | 45-69 |
| Organisations | |
| Abstract |
In this paper, we devise a machine-learning approach to tackle the complex task of investigating leaders in a multi-national organisation: the Intergovernmental Panel on Climate Change (IPCC). The difficulty of this task lies in the impossibility of spelling out the characteristics that define leadership in a complex and highly distributed organisation endowed with a hybrid mission at the interface between science and politics. To bypass this difficulty, we start from a sample of formal organisational leaders, defined by the fact of having been officially nominated for the Bureau of the IPCC – among the highest positions in the organisation. A series of anomaly-detection techniques is used to identify IPCC contributors who are or might be Bureau candidates. We find that we can construct a precise albeit implicit model of IPCC leadership despite its social and political complexity. We then suggest various explainable AI methods to investigate why the model has selected certain members of the IPCC as Bureau candidates. Our analysis of the AI model and of its errors suggests interesting findings about asymmetries in the data and in the IPCC, as well as shortcomings of the techniques we employed.
|
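The anomaly-detection step described in the abstract can be illustrated with a minimal k-nearest-neighbour distance scorer. This is a generic stand-in, not the authors' actual pipeline, and the contributor feature values below are invented for illustration: a point far from the bulk of the data receives a high anomaly score.

```python
import math

def knn_anomaly_scores(points, k=2):
    """Score each point by its mean distance to its k nearest neighbours.
    Higher scores indicate points further from the bulk of the data."""
    scores = []
    for i, p in enumerate(points):
        # Distances from p to every other point, sorted ascending.
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# Hypothetical contributor features (e.g. publications, chapters authored):
contributors = [(10, 3), (11, 3), (9, 4), (10, 4), (40, 12)]
scores = knn_anomaly_scores(contributors)
# The last contributor lies far from the cluster, so it scores highest.
print(scores.index(max(scores)))  # → 4
```

In practice the paper reports using a series of such techniques on richer organisational features; this sketch only conveys the shape of the idea.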
| Document type | Article |
| Note | In special issue: Reproducibility and Explainability in Digital Humanities |
| Language | English |
| Published at | https://doi.org/10.1007/s42803-023-00080-z |
| Downloads | s42803-023-00080-z (Final published version) |
