As a member of the Bank of England’s Monetary Policy Committee, each month I cast a vote on the setting of monetary policy. And across the worlds of business, finance and economic policy, many others take key economic decisions every day of the week. All these decisions have one thing in common: they rest on an evidence base of economic statistics. And good decisions need good statistics to support them.
Economic statistics are an important public good. Governments typically recognise this by supporting their production, ideally through an independent agency. Such national statistical institutes are tasked with delivering a diverse range of timely and accurate statistics, usually according to internationally agreed – and therefore slowly moving – criteria. But producing reliable and meaningful statistics is by no means easy. If statistical bodies are to remain relevant in the face of an ever-changing economy, their statistics need to evolve too.
Five months ago, I was asked by the UK Chancellor of the Exchequer to assess the UK’s current and future statistical needs, as well as the capacity of the Office for National Statistics to meet them. I recently delivered an interim report (Bean 2015), with the final report due next March. While the focus of the report is on the UK, the challenges in maintaining quality economic statistics are similar across national statistical institutes, and a number of the recommendations are relevant to the provision of economic statistics elsewhere.
The pace of change in the modern economy is so rapid that measurement cannot easily keep up. Sometimes it feels as if we are trying to measure space travel with a ruler. Innovation and technological change, the wellspring of economic advancement, are constantly challenging the statistical frameworks used to measure key economic variables. The exponential growth of computing power and the digitisation of many activities have already radically altered the way people conduct their lives, both at work and at play. And yet another wave of disruptive technology is surely around the corner with recent advances in materials science, artificial intelligence and genetic engineering.
One example is GDP. Leaving public services to one side, GDP is estimated by focusing exclusively on activities carried out in the market economy, and assuming that market prices reflect values. A key feature of the provision of many internet-based services is that, although the fixed costs of development may be high, the marginal cost of their subsequent use is negligible. In such markets, if entry is unrestricted, prices are apt to be driven down to zero. That in turn has forced suppliers to finance their business by selling advertising space, which is not always priced commensurately with the value of the service to the consumer. Conventional approaches to the measurement of economic activity will not necessarily pick up this activity as they do with more traditional business models. More generally, the digital revolution has led to new business models where it is harder for the statistician to observe both transactions and a corresponding price.
Moreover, labour market statistics typically assume a clean distinction between work and leisure. The digital economy and the internet have made this assumption less tenable by blurring the boundaries between work, domestic activity (also known as ‘home production’), and leisure. People are now able to provide themselves with information that would previously have been supplied through a market transaction. For example, today’s consumer books hotels and flights directly, or else through an online portal, rather than requiring the services of a travel agent. Innovation has thus led to the disintermediation of information-intensive services.
The producers of economic statistics should be in the best position to lead the response to such challenges. National statistical institutes therefore ought to be at the forefront in understanding and explaining the limitations of their statistics and in exploring the scope for doing things differently. They should be constantly horizon-scanning, investigating emerging economic phenomena, and working closely with outside experts to conduct one-off studies of the quantitative impact of those phenomena on existing statistics. This may then prompt the development of experimental statistics capturing the new phenomena better, or else change the way existing statistics are collected or defined.
To ensure economic statistics remain relevant, they need to be based on good quality underlying data. One element of a forward-looking approach is to make better use of existing and new data sources. Falling response rates to traditional survey questionnaires present a common challenge and risk rendering statistical estimates unrepresentative. All countries therefore need to unlock the full potential of rich administrative micro datasets – information obtained for purposes other than the construction of statistics – currently held by both public and private organisations, in order to improve the timeliness, accuracy and efficiency of production of economic statistics. As I note in my interim report, the UK lags notably behind many other advanced economies in its use of public sector administrative data.
Greater use of these new data sources simultaneously holds out the prospect of more timely and accurate economic statistics, and a reduction in the reporting burdens on businesses and households. To give an example of the potential improvement in coverage, the Annual Business Survey (the UK’s main structural business survey, used in part to measure value added) covers roughly 63,000 businesses but fails to capture any businesses falling under a minimum size threshold. By contrast, data held by the tax authorities would provide near-census information for over 1.8 million businesses. Canada began embedding administrative data in the production of statistics in the 1990s and has been able to reduce the survey burden on businesses by 20% – saving businesses over CA$600,000 a year in compliance costs as a result.
Access to such micro data also allows economic statistics to be stratified finely and in many different ways, according to user needs. This potentially enables the construction of statistics with a fine breakdown by industry or by region. It is also of potential value when it comes to future-proofing economic statistics. It is in the nature of things that some industries and sectors rise in relative importance, while others decline. Access to near-census information from administrative data allows the warranted changes in the industrial classification scheme to be applied retrospectively more easily than would be the case if only survey data were available.
Obstacles to the use of administrative data therefore need to be removed. In some countries, including the UK, this includes relaxing restrictive legal frameworks in order to allow statistical producers unfettered access to the data for statistical purposes, while ensuring appropriate ethical safeguards are in place and privacy is protected. Statistical agencies also need to develop and maintain their data science capabilities to exploit these troves of data. Reaping the rewards of these new methods will take time, but in order to make progress statistical bodies need to do more than dip their toes in the water – they need to immerse themselves fully.
Addressing the challenge of measuring a modern dynamic economy not only requires statistical organisations to have the right skills, methods and technological systems, but also requires them to be pro-active and creative, curious and self-critical. Given the rapid change happening around them, to remain relevant they have to run fast just to stand still. But the benefits to decision makers – and to society more generally – from having a nimble statistical institute that is at the forefront of keeping its statistics relevant, timely and accurate, are significant.
Bean, C. (2015), “Independent Review of UK Economic Statistics: Interim Report”.