In AI we trust? Perceptions about automated decision-making by artificial intelligence

Open Access
Publication date 09-2020
Journal AI & Society
Volume 35, Issue 3
Pages 611-623
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam School of Communication Research (ASCoR)
  • Faculty of Law (FdR)
  • Interfacultary Research
Abstract
Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing on social science theories and the emerging body of research on algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, as well as the boundary conditions of these perceptions, namely the extent to which they differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are by and large concerned about risks and hold mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics. Interestingly, decisions taken automatically by AI were often evaluated as on par with, or even better than, those of human experts for specific decisions. Theoretical and societal implications of these findings are discussed.
Document type Article
Language English
DOI https://doi.org/10.1007/s00146-019-00931-w