Competition and cooperation in artificial intelligence standard setting: Explaining emergent patterns

Open Access
Publication date 09-2023
Journal Review of Policy Research
Volume 40, Issue 5
Pages (from-to) 781-810
Number of pages 30
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam Institute for Social Science Research (AISSR)
Abstract
Efforts to set standards for artificial intelligence (AI) reveal striking patterns: technical experts hailing from geopolitical rivals, such as the United States and China, readily collaborate on technical AI standards within transnational standard-setting organizations, whereas governments are much less willing to collaborate on global ethical AI standards within international organizations. Whether competition or cooperation prevails can be explained by three variables: the actors that make up the membership of the standard-setting organization, the issues on which the organization's standard-setting efforts focus, and the "games" actors play when trying to set standards within a particular type of organization. A preliminary empirical analysis supports the contention that actors, issues, and games affect the prospects for cooperation on global AI standards. This matters because shared standards are vital for achieving truly global frameworks for the governance of AI. Such global frameworks, in turn, lower transaction costs and reduce the probability that the world will witness the emergence of AI systems that threaten human rights and fundamental freedoms.
Document type Article
Language English
DOI https://doi.org/10.1111/ropr.12538