Citation

Discussion Paper Details

Please find the details for DP17298 in an easy-to-copy-and-paste format below:

Full Details

Title: Aligned with Whom? Direct and social goals for AI systems

Author(s): Avital Balwit and Anton Korinek

Publication Date: May 2022

Keyword(s): Agency theory, AI governance, delegation, direct alignment, social alignment

Programme Area(s): Industrial Organization; Macroeconomics and Growth; Public Economics

Abstract: As artificial intelligence (AI) becomes more powerful and widespread, the AI alignment problem - how to ensure that AI systems pursue the goals that we want them to pursue - has garnered growing attention. This article distinguishes two types of alignment problems depending on whose goals we consider, and analyzes the different solutions necessitated by each. The direct alignment problem considers whether an AI system accomplishes the goals of the entity operating it. In contrast, the social alignment problem considers the effects of an AI system on larger groups or on society more broadly. In particular, it considers whether the system imposes externalities on others. Whereas solutions to the direct alignment problem center on more robust implementation, social alignment problems typically arise from conflicts between individual and group-level goals, elevating the importance of AI governance to mediate such conflicts. Addressing the social alignment problem requires both enforcing existing norms on the developers and operators of AI systems and designing new norms that apply directly to the AI systems themselves.

For full details and related downloads, please visit: https://cepr.org/active/publications/discussion_papers/dp.php?dpno=17298

Bibliographic Reference

Balwit, A. and Korinek, A. 2022. 'Aligned with Whom? Direct and social goals for AI systems'. CEPR Discussion Paper No. 17298. London: Centre for Economic Policy Research. https://cepr.org/active/publications/discussion_papers/dp.php?dpno=17298