
Selection in surveys

Surveys are a crucial source of information for many important policy decisions. Yet, little is known about the extent to which different biases affect conclusions drawn from such data, and what we can do about them. Using survey data linked to administrative data, this column shows that a particular type of bias – nonresponse bias – can be large. The authors develop methods to detect and correct for nonresponse bias, which rely on simple changes to widely used survey designs.

When COVID-19 swept the world in early 2020 and economic activity collapsed, researchers and policymakers scrambled to determine the economic impact of the virus on everything from employment, income, and childcare to business closures, commercial property, and work-from-home strategies. Policymakers needed answers quickly to plan the most effective responses and direct limited resources to benefit those who needed them the most.

The problem was that there was little to no real-time data. It would be months before reliable statistics were available. Still, policymakers had to act. Rather than make policy in the dark, officials reached for surveys to cast at least some light on dimly understood phenomena. Economists and other researchers filled the data gap by conducting all manner of surveys about household and business activity.

Nonresponse bias in survey data is difficult to detect and counteract

How accurate are such surveys? What biases lurk within the contact lists from which researchers draw individuals who are asked to answer survey questions? Even if a contact list reflects the population, what about all those people who do not respond to a survey? What if the respondents and non-respondents are different in ways that matter for conclusions drawn from the survey data?

These are questions that apply to all surveys, not just COVID-related ones, and they are only growing in importance, as economists increasingly rely on survey data (see Figure 1 for trends over time). It is the last question – concerning the impact of nonresponse bias and what can be done to detect and correct for it – that has long vexed researchers.

Figure 1 Use of survey data in top-five publications


Notes: This figure shows how the collection and use of survey data have evolved since 1974. The use of survey data for economics research increased during the 1980s and early 1990s, before starting to decline in the mid-1990s. The increase happened in conjunction with a rise in the use of extensive, systematically collected household survey panels. Since 2010, the data show a renewed upward trend despite no change in the use of these household survey panels. This suggests that not only are economists using survey data more, but they have also turned to generating their own customized survey data.

A variety of methods have been developed to correct for nonresponse bias due to differences in observable characteristics (e.g. Little and Rubin 2019). For example, when the gender ratio among respondents skews more male than the general population, researchers frequently upweight female survey respondents’ answers. This approach is often limited by the availability of only a small set of observable characteristics, while other potentially important information remains unobserved in most data sets. In our recent paper (Dutz et al. 2021), we provide an example where most of the nonresponse bias is explained by differences in unobservable characteristics. When this occurs, widely used methods may fail to correct for such bias. Moreover, our research shows that popular reweighting methods intended to correct for selection on observables can in fact exacerbate nonresponse bias by amplifying unobservable differences.
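To make the re-weighting idea concrete, here is a minimal Python sketch of post-stratification on a single observable (gender). The tiny data frame, the column names, and the 50/50 population split are hypothetical and purely for illustration; they are not taken from any of the studies cited above.

```python
# Minimal sketch of post-stratification on one observable (gender).
# Data, column names, and the 50/50 population split are hypothetical.
import pandas as pd

respondents = pd.DataFrame({
    "gender":     ["F", "F", "M", "M", "M", "M", "M", "M"],
    "unemployed": [1,   0,   0,   0,   1,   0,   0,   0],
})

pop_share = {"F": 0.5, "M": 0.5}                      # known population shares
resp_share = respondents["gender"].value_counts(normalize=True)
respondents["weight"] = respondents["gender"].map(lambda g: pop_share[g] / resp_share[g])

raw = respondents["unemployed"].mean()
weighted = (respondents["weight"] * respondents["unemployed"]).sum() / respondents["weight"].sum()
print(f"raw unemployment rate: {raw:.2f}, re-weighted: {weighted:.2f}")
```

The limitation is visible in the construction itself: the weights depend only on characteristics the researcher observes, so they cannot repair selection that operates through unobservables, and, as noted above, they can even make it worse.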

Instead of correcting for nonresponse bias ex post, researchers often attempt to mitigate it by increasing response rates. Widely used guidelines often assert that higher response rates are desirable because they indicate lower nonresponse bias. For example, the US Office of Management and Budget (2006: 60) asserts that “response rates are an important indicator of the potential for nonresponse bias” in its guidelines on minimum methodology requirements for federally funded projects. Similarly, the Abdul Latif Jameel Poverty Action Lab (J-PAL) publishes research guidelines that state “increasing response rates on a subsample and up-weighting the subsample will reduce bias” (J-PAL 2021), and that the “risk of bias [is] increasing with the attrition rate” (J-PAL 2020). However, our research indicates that attempts to increase response rates may in fact lead to more nonresponse bias.

We show that existing approaches to dealing with nonresponse may fail to correct for nonresponse bias. Motivated by these limitations, our research proposes and validates novel techniques to test and correct for nonresponse bias.

Governments use survey data to inform many important policy decisions

Why does this matter? Many are familiar with the Decennial Census conducted every ten years by the US Census Bureau, which, among other things, determines congressional representation across states. Less well known is that the Census Bureau also conducts more than 100 annual surveys of households and businesses.1 One of those surveys, the Household Pulse Survey, was developed in response to COVID-19 to collect and disseminate data “in near real-time to inform federal and state response and recovery planning”.2 Others, like the American Community Survey, are the country’s primary source of detailed population and housing data. Together, these surveys informed the distribution of more than $675 billion in funds in fiscal year 2015, according to a 2017 Census Bureau analysis.3

Researchers are limited in their toolkit when it comes to nonresponse bias

With so much money on the line, it is important to get survey data right – or as right as possible. Researchers typically take care to avoid sampling bias by inviting a representative sample of individuals to participate in a survey. However, far less attention is usually paid to who, among the invited individuals, actually chooses to participate. This makes nonresponse bias an overlooked danger. Our research includes a systematic review of the recent economics literature to document that researchers often (explicitly or implicitly) assume that nonresponse bias does not exist or, if it does, that it can be eliminated by re-weighting participants on observed demographics to bring them more in line with the population.

These assumptions and conventional practices raise several questions. Does nonresponse bias affect the conclusions drawn from survey data? If so, what causes such biases to occur? Are these effects caused by observed or unobserved differences between participants and nonparticipants? Further, can surveys be designed differently to facilitate the detection and correction of these differences?

How important is nonresponse bias? An example from Norway

To shed some light on these and other questions, we employ the Norway in Corona Times (NCT) survey conducted by Norway’s national statistical agency. This survey was designed to study the immediate labour market consequences of the COVID-19 lockdown that began in March 2020. The survey has three features that make it attractive for analysing survey participation and nonresponse bias. First, the survey was conducted on a random sample from Norway’s adult population, thus eliminating non-representative sampling as a source of bias. Second, invited individuals were randomly assigned different financial incentives to participate. Third, invited individuals were linked to administrative data (from government agencies), thus allowing us to observe outcomes for all individuals, irrespective of their participation in the survey.

Together, these features enable us to quantify who participates in the survey, the magnitude of nonresponse bias, and the performance of methods intended to correct for such bias. Based on the data, we draw three broad conclusions about the presence of selective participation and nonresponse bias: 

  1. Labour market outcomes (recorded in the administrative data) for those who participated in the NCT survey are substantially different from those who did not participate. If these outcomes had been responses to survey questions (as they often are), there would have been a large nonresponse bias in the survey. Correcting for differences based on a rich set of observables would have done little to reduce this bias. 
  2. Attempts to mitigate nonresponse bias by increasing incentives for participation can backfire. Even though participation rates increase with incentives, nonresponse bias does too. 
  3. There are large differences across incentive groups in their responses to NCT survey questions that persist after adjusting for observables, consistent with the finding in the administrative data that differences between participants and nonparticipants are primarily due to unobservable factors. These differences are economically meaningful: the Norwegian government’s projected expenditure on unemployment insurance benefits, as a share of total expenditures on national insurance, ranges from 13.2% to 18.4% across incentive groups. These projections are off by 14–20% relative to projections based on the (retrospectively observed) true application rate for unemployment insurance.

Can surveys be designed differently to ease detection and correction of nonresponse bias?

For the purposes of this column, let’s sketch a simple picture to illustrate our methodology. Imagine that you conduct a survey with a sample that is randomly drawn from a country’s whole adult population. You randomly offer different levels of a financial incentive to participate – say, either $0, $5, or $10. At each level of incentive, there are certain people who will not respond because the incentive is not high enough. Higher incentives will encourage responses from people who would have otherwise not participated. Depending on whether these people make the pool of respondents more or less similar to the population, higher incentives may either reduce or increase nonresponse bias.
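The ambiguity is easy to see in a small simulation. The sketch below is our own stylised example, not the NCT data: by assumption, the incentive is more persuasive for people with low values of the outcome, so the respondents recruited at higher incentives differ systematically from those who would have responded anyway.

```python
# Illustrative simulation: higher incentives raise the response rate, but the
# respondent pool can still drift away from the population. All parameters
# below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y = rng.normal(0.0, 1.0, n)                  # outcome of interest (population mean ~0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for z in (0, 5, 10):                         # randomised incentive, in dollars
    # Assumed response model: at $0 everyone is equally likely to respond,
    # but the incentive persuades low-y individuals more strongly.
    p = sigmoid(-1.0 + (0.20 - 0.15 * y) * z)
    responded = rng.random(n) < p
    print(f"incentive ${z:>2}: response rate {responded.mean():5.1%}, "
          f"respondent mean {y[responded].mean():+.2f}, population mean {y.mean():+.2f}")
```

Under these assumed parameters, raising the incentive raises participation and simultaneously pulls the respondent mean further from the population mean. With a different assumed correlation, the bias would instead shrink, which is exactly why the response rate by itself says little about nonresponse bias.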

Further, there is another layer of nonresponse that researchers rarely consider: nonparticipants who never become aware of the survey in the first place. They may, for example, never see the email invitation, never answer their phone, or otherwise remain unaware that they were invited. If this group of nonparticipants is large enough, a key part of the population is missing when considering only participants.

We develop a novel model of survey participation that is unique in that it accounts for both forms of nonresponse bias – those who decline to participate and those who are unaware of the survey. Taking the model to the data requires that randomised incentives are embedded in the survey design. Note that randomising financial incentives does not necessarily make surveys more expensive to administer; rather, randomisation can take existing resources that would have been used anyway and apply them in a random fashion.
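To illustrate the logic in a deliberately stripped-down form (this is not our actual estimator), suppose that responding requires first seeing the invitation, with some probability q that the incentive cannot affect, and then agreeing to participate, modelled here as a probit in the incentive amount. With three randomised incentive arms, the three observed response rates pin down the three parameters. The sketch below fits this toy model by maximum likelihood; the arm sizes and response counts are hypothetical.

```python
# Toy two-layer participation model: P(respond | incentive z) = q * Phi(g0 + g1*z),
# where q is the probability of ever seeing the invitation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

z = np.array([0.0, 5.0, 10.0])     # randomised incentive levels (dollars)
n = np.array([5000, 5000, 5000])   # invitations per arm (hypothetical)
k = np.array([700, 1100, 1400])    # respondents per arm (hypothetical)

def neg_log_lik(theta):
    q_logit, g0, g1 = theta
    q = 1.0 / (1.0 + np.exp(-q_logit))       # share who ever see the invitation
    p = q * norm.cdf(g0 + g1 * z)            # response probability in each arm
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

fit = minimize(neg_log_lik, x0=np.array([0.0, -0.5, 0.05]), method="Nelder-Mead")
q_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(f"estimated share who ever see the invitation: {q_hat:.2f}")
print(f"implied willingness to respond at $0 vs $10: "
      f"{norm.cdf(fit.x[1]):.2f} vs {norm.cdf(fit.x[1] + 10 * fit.x[2]):.2f}")
```

Our actual model is considerably richer, but the source of identification is the same: experimentally varied incentives trace out how participation responds to them, helping to separate those who never see the invitation from those who see it and decline.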

Using the linked survey and administrative data, we assess the performance of this model relative to other models used in or adapted from the existing literature. Our main conclusions are:

  • What matters for nonresponse bias is not participation rates, but who participates. Counter to common guidance on survey design, nonresponse bias may well increase with participation rates.
  • Some widely used reweighting methods intended to correct for selection on observables can exacerbate nonresponse bias by amplifying unobservable differences. 
  • Given these limitations of existing methods, researchers should consider implementing methods that can correct for nonresponse bias due to selection on unobservable characteristics. As we demonstrate, financial incentives can be used to test and correct for nonresponse bias due to unobserved differences between participants and nonparticipants. The key is randomisation in the assignment of incentives.
  • We show how to use randomised incentives combined with models of survey participation to correct for nonresponse bias (a stylised sketch of this correction step follows below). We propose a novel model that improves upon existing models by allowing for non-participation due to both not seeing the survey invitation and declining to participate, conditional on seeing the invitation. Our model performs well at correcting for nonresponse bias. The methods described in this paper show the way forward for researchers to design better surveys that test and account for unobservables, and to develop models that account for unobserved heterogeneity in all its forms.
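To give a flavour of the correction step, the sketch below uses a textbook selection model with a bivariate-normal unobservable, which is far simpler than the model in our paper, and runs on simulated rather than real data: participation depends on the randomly assigned incentive and on an unobservable that is correlated with the outcome, and maximum likelihood recovers the population mean from respondents' outcomes together with the participation indicators of all invitees.

```python
# Stylised Heckman-type correction on simulated data. The randomised incentive
# shifts participation but not outcomes; joint normality is assumed only to
# keep the example short. None of the numbers come from the NCT survey.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20_000
z = rng.choice([0.0, 5.0, 10.0], size=n)            # randomised incentive
mu_true, sigma_true, rho_true = 2.0, 1.0, -0.6
u = rng.normal(size=n)                               # unobserved participation taste
e = rho_true * u + np.sqrt(1 - rho_true**2) * rng.normal(size=n)
y = mu_true + sigma_true * e                         # outcome (population mean 2.0)
r = (-1.0 + 0.15 * z + u) > 0                        # who participates

def neg_log_lik(theta):
    mu, log_sigma, g0, g1, a = theta
    sigma, rho = np.exp(log_sigma), np.tanh(a)
    idx = g0 + g1 * z
    res = (y - mu) / sigma
    # respondents: outcome density times P(participate | outcome, incentive)
    ll_resp = norm.logpdf(res) - np.log(sigma) \
              + norm.logcdf((idx + rho * res) / np.sqrt(1.0 - rho**2))
    # non-respondents: P(not participate | incentive)
    ll_nonresp = norm.logcdf(-idx)
    return -np.sum(np.where(r, ll_resp, ll_nonresp))

start = np.array([y[r].mean(), 0.0, 0.0, 0.1, 0.0])
fit = minimize(neg_log_lik, start, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
print(f"respondent mean (biased):  {y[r].mean():.2f}")
print(f"model-corrected pop. mean: {fit.x[0]:.2f}   (truth: {mu_true:.2f})")
```

In this simulation people with a strong taste for participating tend to have low outcomes, so the respondent mean sits well below the truth, yet the model-based estimate recovers the population mean because the randomised incentive supplies the exogenous variation in participation that the correction needs. Our paper implements this idea with a much richer participation model and validates it against the linked administrative outcomes.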

Conclusion

Surveys are ubiquitous in academic research and key to many policy decisions, including decisions about the allocation of limited public resources. This research focuses attention on an often-neglected issue – nonresponse bias – and shows that dismissing it can have important consequences.

We use the Norway in Corona Times survey, which randomly assigned participation incentives, to show how incentives are useful in detecting nonresponse bias in both linked administrative outcomes and in the survey responses. Importantly, we find that both sets of outcomes reveal large nonresponse bias, even after correcting for observable differences between participants and nonparticipants.

Finally, and importantly for further research, we offer methodological improvements that allow for unobservable differences to exist between participants and nonparticipants. Our model incorporates these and other enhancements and improves upon existing models by allowing for rich heterogeneity in individuals’ motivation for participating. As a result, our model hews closer to the data and offers results closer to the truth.

References

Dutz, D, I Huitfeldt, S Lacouture, M Mogstad, A Torgovitsky and W van Dijk (2021), “Selection in Surveys”, Working Paper No. 2021-141, Becker Friedman Institute, University of Chicago. 

J-PAL (2020), “Research resources: Data analysis”, https://www.povertyactionlab.org/.

J-PAL (2021), “Research resources: Increasing Response Rates of Mail Surveys and Mailings”, https://www.povertyactionlab.org/. 

Little, R J and D B Rubin (2019), Statistical Analysis with Missing Data, John Wiley & Sons.

Office of Management and Budget (2006), “Questions and answers when designing surveys for information collections”, Obama White House Archives. 

Endnotes

1 List of Surveys: census.gov/programs-surveys/surveyhelp/list-of-surveys.html.

2 Measuring Household Experiences During the Coronavirus Pandemic: census.gov/data/experimental-data-products/household-pulse-survey.html.

3 Uses of Census Bureau Data in Federal Funds Distribution: census.gov/library/working-papers/2017/decennial/census-data-federal-funds.html.
