Discussion paper

DP18517 The risks of risk-based AI regulation: taking liability seriously

The development and regulation of multi-purpose, large “foundation models” of AI appears to have reached a critical stage, with major investments and new applications announced every other day. Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4. Legislators around the globe are competing to set the blueprint for a new regulatory regime. This paper analyses the most advanced legal proposal, the European Union’s AI Act, currently in the final stage of “trilogue” negotiations between the EU institutions. This legislation will likely have extra-territorial implications, sometimes called “the Brussels effect”. It also constitutes a radical departure from conventional information and communications technology policy by regulating AI ex ante through a risk-based approach that seeks to prevent certain harmful outcomes based on product safety principles. We offer a review and critique, focusing in particular on the AI Act’s problematic obligations regarding data quality and human oversight. Our proposal is to take liability seriously as the key regulatory mechanism. This signals to industry that, if a breach of law occurs, firms must know what their inputs were and how to retrain the system to remedy the breach. Moreover, we suggest differentiating between endogenous and exogenous sources of potential harm, which can be mitigated by carefully allocating liability between developers and deployers of AI technology.

Citation

Kretschmer, M, T Kretschmer, A Peukert and C Peukert (2023), ‘DP18517 The risks of risk-based AI regulation: taking liability seriously’, CEPR Discussion Paper No. 18517. CEPR Press, Paris & London. https://cepr.org/publications/dp18517