Mapping Value(s) in AI: Methodological Directions for Examining Normativity in Complex Technical Systems

Open Access
Publication date 2022
Journal Sociologica
Volume 16, Issue 3
Pages (from-to) 51-83
Number of pages 33
Organisations
  • Faculty of Humanities (FGw) - Amsterdam Institute for Humanities Research (AIHR) - Amsterdam School for Cultural Analysis (ASCA)
  • Faculty of Law (FdR) - T.M.C. Asser Instituut
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
  • Faculty of Law (FdR) - Leibniz Center for Law
Abstract

This paper seeks to develop a multidisciplinary methodological framework and research agenda for studying the broad array of 'ideas', 'norms', or 'values' incorporated and mobilized in systems relying on AI components. We focus on recommender systems as a broader field of technical practice and take YouTube as an example of a concrete artifact that raises many social concerns. To situate the conceptual perspective and rationale informing our approach, we briefly discuss investigations into normativity in technology more broadly and refer to 'descriptive ethics' and 'ethigraphy' as two approaches concerned with the empirical study of values and norms. Drawing on science and technology studies, we argue that normativity cannot be reduced to ethics, but requires paying attention to a wider range of elements, including the performativity of material objects themselves. The method of 'encircling' is presented as a way to deal with both the secrecy surrounding many commercial systems and the socio-technical and distributed character of normativity more broadly. The resulting investigation draws on a series of approaches and methods to construct a much wider picture than any single discipline could produce on its own. The remainder of the paper develops this methodological framework, organized into three layers that demarcate specific avenues for conceptual reflection and empirical research, moving from the more general to the more concrete: ambient technical knowledge, local design conditions, and materialized values. We conclude by arguing that deontological approaches to normativity in AI need to take into account the many different ways norms and values are embedded in technical systems.

Document type Article
Language English
DOI https://doi.org/10.6092/issn.1971-8853/15910
Other links https://www.scopus.com/pages/publications/85161326469