Artificial intelligence (AI) technologies have advanced rapidly over the past several years, and governments around the world have responded by developing AI strategies. France released its national AI strategy in March 2018, emphasising research funding, ethical issues, and inequality. China has stated a goal of being the top AI country by 2030. The EU, Canada, Japan, the Obama administration, the Trump administration, and many others have put forth their own plans (Sutton 2018).
Pessimistic views of the impact of AI on society are widespread. Elon Musk, Stephen Hawking, Bill Gates, and others warn that rapid advances in AI could transform society for the worse. More optimistically, AI could enhance productivity so dramatically that people have plenty of income and little unpleasant work to do (Stevenson 2018). Regardless of whether one adopts a pessimistic or optimistic view, policy will shape how AI affects society.
What is AI?
While the Oxford English Dictionary defines artificial intelligence as “the theory and development of computer systems able to perform tasks normally requiring human intelligence”, the recent excitement is driven by advances in machine learning, a field of computer science focused on prediction. As machine learning pioneer Geoffrey Hinton put it: “Take any old problem where you have to predict something and you have a lot of data, and deep learning is probably going to make it work better than existing techniques”.1 Recent advances in AI can therefore be seen as a drop in the cost of prediction. Because prediction is an important input into decision-making, in recent work we discuss how AI is likely to have widespread consequences as a general purpose technology (Agrawal et al. 2018a, 2018b).
There are two aspects of AI policy.
- First, regulatory policy affects the speed at which the technology diffuses and the form that it takes.
- Second, a number of policies focus on mitigating potential negative consequences of AI with respect to labour markets and antitrust concerns.
Policies that will influence the diffusion of AI
Liability rules will impact the diffusion of AI (Galasso and Luo 2018). Firms will be less likely to invest in the development of AI products in the absence of clear liability rules. Autonomous vehicles provide a useful example. A number of different companies will participate in the development of a self-driving car. If a car gets into an accident, would the sensor manufacturer be liable? The telecommunications provider? The vehicle manufacturer? Or perhaps an AI software firm? Without clear rules on who is liable, all may hesitate to invest. If autonomous vehicles would save lives, should manufacturers of non-autonomous vehicles be held to higher standards than current law requires? This would accelerate diffusion of the safer technology. In contrast, if increased liability falls primarily on the newer technology, then diffusion will slow.
In addition, as with other technologies, advances will come faster with greater research support, well-balanced intellectual property law, and the ability to experiment safely.
Policies that address the consequences of AI
A common worry about AI concerns the potential impact on jobs. If machines can do tasks normally requiring human intelligence, will there be jobs left for humans? In our view, this is the wrong question. There are plenty of horrible jobs. Furthermore, more leisure is generally considered to be a positive development, although some have raised concerns about the need to find alternate sources of meaning (Stevenson 2018). The most significant long-run policy issues relate to the potential changes to the distribution of the wealth generated by the widespread use of AI. In other words, AI may increase inequality.
If AI is like other types of information technology, it is likely to be skill-biased. The people who benefit most from AI will be educated people who are already doing relatively well. These people are also more likely to own the machines. Policies to address the consequences of AI for inequality relate to the social safety net. While some have floated relatively radical ideas to deal with the potential increase in inequality – such as a tax on robots and a universal basic income – the AI context is not unique in weighing the costs and benefits of social programmes from progressive taxation to universal healthcare.
In the shorter run, if AI diffuses widely, the transition could mean temporary displacement for many workers. Acemoglu and Restrepo (2018) emphasise a short- and medium-term mismatch between skills and technology. This means that policy preparation in advance of the diffusion of AI should consider both business cycles and education policy. Technology-driven layoffs concentrated in location and time are not unique to AI. They were a feature of factory automation and the mechanisation of farming. For education policy, there are many open questions. Should we emphasise social skills and the humanities if machines are increasingly able to do technology-related prediction tasks? Should the education system evolve to focus more on adults? How do the skills needed as AI diffuses differ from the skills currently provided through the education system?
Another policy question around the diffusion of AI relates to whether it will lead to monopolisation of industry. The leading companies in AI are large in terms of revenue, profits, and especially market capitalisation (high multiples on earnings). This has led to an increase in antitrust scrutiny of the leading technology firms from governments (particularly the European Commission) and in the press (see, for example, The Economist’s 20 January 2018 cover story, “The new titans, and how to tame them”, and their subsequent story, “The market for driverless cars will head towards monopoly”, on 7 June 2018). Much of this antitrust scrutiny focuses on the role of these firms as platforms, not on their use of AI per se. The feature that makes AI different is the importance of data. Firms with more data can build better AI. Whether this leads to economies of scale and the potential for monopolisation depends on whether a small lead early in the development cycle creates a positive feedback loop and a long-run advantage.
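The feedback loop described above can be made concrete with a stylised simulation. This sketch is not from the source and its parameter values (the elasticity `alpha`, the diminishing-returns exponent, user counts) are illustrative assumptions only: two firms attract users in proportion to prediction quality, quality improves with accumulated data, and the strength of that link determines whether an early data lead compounds into dominance or washes out.

```python
# Stylised simulation of a data feedback loop between two AI firms.
# Each period, users split in proportion to prediction quality raised
# to an elasticity `alpha`; the data each firm collects grows with its
# user share, and quality improves with accumulated data (diminishing
# returns). All parameter values are illustrative assumptions.

def simulate(initial_data, alpha, periods=50, users=1000.0):
    data = list(initial_data)
    for _ in range(periods):
        quality = [d ** 0.5 for d in data]       # diminishing returns to data
        weights = [q ** alpha for q in quality]  # alpha: strength of feedback
        total = sum(weights)
        shares = [w / total for w in weights]
        data = [d + users * s for d, s in zip(data, shares)]
    total = sum(data)
    return [d / total for d in data]             # long-run data shares

# Firm A starts with a modest data lead (110 vs 100 observations).
weak = simulate([110.0, 100.0], alpha=1.0)    # weak feedback: shares stay close
strong = simulate([110.0, 100.0], alpha=8.0)  # strong feedback: lead compounds
print(f"weak feedback:   A = {weak[0]:.2f}, B = {weak[1]:.2f}")
print(f"strong feedback: A = {strong[0]:.2f}, B = {strong[1]:.2f}")
```

With weak feedback the firms end up with roughly equal data shares despite A's head start; with strong feedback the same small lead snowballs into near-monopoly. The design choice of interest is `alpha`: the empirical question for antitrust is precisely how steeply quality advantages translate into user share.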
Much of economic policy for AI is simply economic policy. For the diffusion of AI, it resembles innovation policy. For the consequences of AI, it resembles public policy (the social safety net) and competition policy (antitrust). We summarise aspects of economic policy for AI in Table 1.
Table 1 Aspects of economic policy for artificial intelligence
Although AI is like other technologies in many respects, it is unusual in a few important dimensions. Specifically, AI is both a general purpose technology (GPT) – i.e. it has a wide domain of applications – and an ‘invention of a method of invention’ (IMI) (Cockburn et al. 2018, Agrawal et al. 2018). Cockburn et al. assert that “… the arrival of a general purpose IMI is a sufficiently uncommon occurrence that its impact could be profound for economic growth and its broader impact on society.” They assemble and analyse the corpus of scientific papers and patenting activity in AI, and provide evidence consistent with the characterisation of machine learning as both a GPT and an IMI.
The implication concerns the returns to investments in AI policy design. Due to the breadth of applications, the cost of suboptimal policy design will likely be significantly higher than with other technologies – or the benefits of optimal policy greater. Furthermore, the returns to investments in policy design are not only a function of the direct effects, where AI “directly influences both the production and the characteristics of a wide range of products and services”, but also the indirect effects because “AI also has the potential to change the innovation process itself, with consequences that may be equally profound, and which may, over time, come to dominate the direct effect” (Cockburn et al. 2018).
Authors’ note: The points we raise in this column are based on Agrawal et al. (2018a), which in turn builds on discussions at the 2017 NBER Conference on the Economics of AI in Toronto and the associated conference volume (Agrawal et al. 2018c).
References
Acemoglu, D, and P Restrepo (2018), “Artificial Intelligence, Automation and Work”, in A Agrawal, J Gans, and A Goldfarb (eds), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.
Agrawal, A, J Gans, and A Goldfarb (2018a), “Economic Policy for Artificial Intelligence”, NBER Working Paper 24690.
Agrawal, A, J Gans, and A Goldfarb (2018b), Prediction Machines: The Simple Economics of Artificial Intelligence, Harvard Business School Press.
Agrawal, A, J Gans, and A Goldfarb (eds) (2018c), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.
Agrawal, A, J McHale, and A Oettl (2018), “Finding Needles in Haystacks: Artificial Intelligence and Recombinant Growth”, in A Agrawal, J Gans, and A Goldfarb (eds), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.
Cockburn, I, R Henderson, and S Stern (2018), “The Impact of Artificial Intelligence on Innovation”, in A Agrawal, J Gans, and A Goldfarb (eds), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.
Galasso, A, and H Luo (2018), “Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence”, in A Agrawal, J Gans, and A Goldfarb (eds), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.
Goldfarb, A, and C Tucker (2012), “Privacy and Innovation”, in J Lerner and S Stern (eds), Innovation Policy and the Economy, Volume 12, NBER, University of Chicago Press: 65-89.
Sutton, T (2018), “An Overview of AI Strategies”, Medium, 28 June.
Stevenson, B (2018), “AI, Income, Employment, and Meaning”, in A Agrawal, J Gans, and A Goldfarb (eds), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.
1. https://www.youtube.com/watch?v=2HMPRXstSvQ (accessed 22 May 2018).