
‘Truth serums’: Measuring people’s expectations in economic surveys

Expectations about uncertain events play an important role in both science and policy, so it is important to be able to elicit people’s expectations in surveys. This column discusses approaches to eliciting expectations that are more complex than the widely used ‘just ask’ approach. Though these complex methods have clear benefits, their costs are not negligible and should be taken into account.

Consumers’ expectations about economic variables are an important input in explaining and predicting their behaviour in a broad set of economic and financial decisions. Macroeconomic policy requires knowledge of expectations about uncertain future income and inflation to predict saving and investment decisions (e.g. Guiso et al. 1992, Guiso and Parigi 1999). Development economists have studied people’s beliefs to understand the adoption of new seed varieties or the decision to settle in areas prone to natural disasters (e.g. Cameron and Shah 2011, Delavande et al. 2011b). Health economists have investigated whether inaccurate beliefs can explain risky health behaviours such as smoking or the shunning of preventive care (e.g. Khwaja et al. 2006, Carman and Kooreman 2014).

Expectations about uncertain events play an important role not only in consumer choice; they are also measured in many other fields of social science, including accounting, education, finance, medicine, meteorology, and politics. In these studies, respondents are typically asked to report their own subjective likelihood assessments of the uncertain event of interest, which can in turn be used to inform policy.

Given their widespread use in academic research and policy, there is an ongoing debate on how to elicit consumers’ expectations in survey studies. The most widely used approach is to ‘just ask’: the analyst presents respondents with a future scenario and asks them for their judgment of the likelihood that the event will materialise. Much research in this context has centred on how to present events, outcomes, and likelihoods to possibly poorly educated respondents (Delavande et al. 2011a), so as to allow people to give unbiased assessments of the future events under consideration. However, a sceptical commentator might argue that basing policy on such expectations data is questionable because respondents lack any incentive to reveal their expectations truthfully. They may be tempted to misrepresent their expectations because of social desirability, or to influence what they think is the goal of the survey.

Incentivising respondents to reveal their expectations

To provide respondents with incentives for truthful revelation of their expectations, several choice-based mechanisms with monetary rewards for accurate predictions have been developed (e.g. Schlag et al. 2015, Trautmann and van de Kuilen 2015). A simple example of such a mechanism is the following. To assess a person’s expectations regarding the future development of the stock market, we may offer her the choice between two bets: receive £100 if the FTSE 100 closes higher than 6000 at the end of the month, or receive £100 if the FTSE 100 closes lower than 6000 at the end of the month. The choice between these two bets depends only on the respondent’s beliefs regarding the development of the FTSE 100. In particular, if she believes that the FTSE 100 closing higher than 6000 is more likely than it closing lower, she will opt for the first bet, and the analyst can deduce that her subjective probability of the FTSE 100 closing higher than 6000 is larger than 50%. By offering more complex choices among bets, more fine-grained expectations about economic variables can be elicited, as illustrated in the sketch below.
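
To make the idea of fine-grained elicitation concrete, the following Python sketch illustrates one possible scheme, a simple bisection over chance bets; the event, the payoff, and the respondent’s behaviour are hypothetical, and the code is not taken from any of the studies cited here. At each step the respondent chooses between a bet on the event and a bet that pays with a known chance p; the switching point brackets her subjective probability.

```python
def elicit_matching_probability(prefers_event_bet, tol=0.01):
    """Illustrative bisection over a 'chance bet' (hypothetical design).

    At each step the respondent chooses between (a) £100 if the event occurs
    and (b) £100 with a known chance p from a transparent randomising device.
    The value of p at which she switches from (a) to (b) brackets her
    subjective probability of the event.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2            # known chance offered by bet (b)
        if prefers_event_bet(p):     # she still prefers betting on the event
            lo = p                   # so her subjective probability exceeds p
        else:
            hi = p                   # so her subjective probability is below p
    return (lo + hi) / 2

# A hypothetical respondent whose subjective probability of the event is 0.65
# and who answers every choice truthfully:
print(round(elicit_matching_probability(lambda p: 0.65 > p), 2))  # -> 0.65
```

With about seven binary choices the interval already shrinks below one percentage point, which is why choice lists of this kind can in principle be kept reasonably short.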

By linking monetary payoffs to the accuracy of the reported expectations, these choice-based methods work very much like a ‘truth serum’: it is in respondents’ best interest to reveal their expectations truthfully. This benefit, though, comes at a price to researchers. Potential drawbacks of these truth serums are that they are highly complex for respondents, and that they often rest on assumptions about respondents’ preferences and attitudes that may not hold in practice. Correcting for the latter requires even more complex designs that may be hard to follow and to implement in surveys and in the field (e.g. Offerman et al. 2009, Hossain and Okui 2013). To make an informed choice among the potential elicitation methods (including the ‘just ask’ alternative), researchers need to understand how the costs of each method measure up to its potential benefits.
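
One standard way of linking payoffs to accuracy is a proper scoring rule. The quadratic rule sketched below is a textbook example rather than the specific mechanism used in any of the studies cited here, and the stakes (a, b) are made-up numbers. For a risk-neutral respondent the expected payment is maximised by reporting her true subjective probability; risk aversion breaks this property, which is the complication addressed by Offerman et al. (2009) and Hossain and Okui (2013).

```python
def quadratic_score(report, outcome, a=10.0, b=10.0):
    """Payment (e.g. in pounds) for reporting probability `report` once the
    0/1 `outcome` of the event is observed."""
    return a - b * (outcome - report) ** 2

def expected_payment(report, belief, a=10.0, b=10.0):
    """Expected payment for a respondent whose subjective probability is `belief`."""
    return (belief * quadratic_score(report, 1, a, b)
            + (1 - belief) * quadratic_score(report, 0, a, b))

# For a risk-neutral respondent with belief 0.3, the report that maximises the
# expected payment is 0.3 itself, i.e. truthful reporting is optimal.
best_report = max((r / 100 for r in range(101)),
                  key=lambda r: expected_payment(r, 0.3))
print(best_report)  # -> 0.3
```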

To weigh the costs (in terms of complexity, survey length, etc.) and benefits (in terms of high-quality expectations data) of different elicitation methods, in a recent paper (Trautmann and van de Kuilen 2015) we compare the performance of several belief elicitation methods in a controlled laboratory experiment with a sample of 206 university students. The key aspects of our design are that:

  • Participants have to make judgments about uncertain events with unknown (to them) likelihoods;
  • We readily observe the underlying events and can reward participants on the basis of their choices; and
  • We can vary the elicitation method.

We examine three dimensions that can be seen as indicators of the quality of the elicited probabilities. First, do the elicited subjective probabilities match the true likelihood of the event? Second, do the elicited probabilities of mutually exclusive events add up to 100%? Third, do participants’ beliefs predict their behaviour in decisions for which these beliefs matter? The sketch below illustrates how these three indicators could be computed from elicited data.
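
As a purely illustrative sketch (the numbers below are made up and the computations are not taken from the paper), the three quality indicators could be computed along the following lines:

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical reports for an event A and its complement, the (known) objective
# likelihood of A, and a 0/1 indicator of whether the respondent later bet on A.
reported_A     = [0.70, 0.40, 0.55, 0.80]   # reported P(A) per respondent
reported_not_A = [0.40, 0.65, 0.50, 0.30]   # reported P(not A) per respondent
true_prob_A    = 0.60                        # objective likelihood of A
bet_on_A       = [1, 0, 1, 1]                # choice that should reflect P(A)

# 1. Calibration: average distance between reports and the true likelihood.
calibration_error = statistics.mean(abs(r - true_prob_A) for r in reported_A)

# 2. Additivity: do P(A) and P(not A) add up to 100% on average?
average_sum = statistics.mean(a + b for a, b in zip(reported_A, reported_not_A))

# 3. Predictive validity: do higher reported beliefs go with betting on A?
predictive_corr = statistics.correlation(reported_A, bet_on_A)

print(calibration_error, average_sum, predictive_corr)
```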

  • Comparing the unincentivised ‘just ask’ approach to the truth serums, we find that respondents are more inclined to base their decisions on their reported expectations when they are rewarded for the accuracy of these expectations.

There were no substantial differences between the various truth serums though, which suggests that the least complex of these methods scores best in terms of cost-benefit efficiency. However, in contrast to the unincentivised approach, the truth serums lead to probabilities of mutually exclusive events that add up to significantly more than 100%; the choice-based approach may thus obscure this desirable property of subjective probabilities. Finally, all methods show a strong degree of conservatism in probability estimates: small probabilities are biased upward and large probabilities are biased downward, in the direction of a 50% belief.

Conclusion  

On balance, there are benefits to using complex methods, but these have to be weighed against the costs of implementing a method in a specific research setting (type of respondent, online versus in-person interviews, etc.). These costs may vary substantially across survey instruments and populations. When no systematic misrepresentation is expected, the simple ‘just ask’ approach may be just fine.

References

Cameron, L, and M Shah (2011), ‘Risk-Taking in the Wake of Natural Disasters’, Monash University Working Paper.

Carman, K, and P Kooreman (2014), ‘Flu Shots, Mammograms, and the Perception of Probabilities’, Journal of Risk and Uncertainty, vol. 49, pp. 43–71.

Delavande, A, X Gine, and D McKenzie (2011a), ‘Eliciting Probabilistic Expectations with Visual Aids in Developing Countries: How Sensitive are Answers to Variations in Elicitation Design?’, Journal of Applied Econometrics, vol. 26, pp. 479–497.

Delavande, A, X Gine, and D McKenzie (2011b), ‘Measuring subjective expectations in developing countries: A critical review and new evidence’, Journal of Development Economics, vol. 94, pp. 151–163.

Guiso, L, T Jappelli, and D Terlizzese (1992), ‘Earnings Uncertainty and Precautionary Saving’, Journal of Monetary Economics, vol. 30, pp. 307–337.

Guiso, L, and G Parigi (1999), ‘Investment and Demand Uncertainty’, Quarterly Journal of Economics, vol. 114, pp. 185–227.

Hossain, T, and R Okui (2013), ‘The Binarized Scoring Rule’, Review of Economic Studies, vol. 80, pp. 984–1001.

Khwaja, A, F Sloan, and M Salm (2006), ‘Evidence on Preferences and Subjective Beliefs of Risk Takers: The Case of Smokers’, International Journal of Industrial Organization, vol. 24, pp. 667–682.

Offerman, T, J Sonnemans, G van de Kuilen, and P P Wakker (2009), ‘A Truth-Serum for Non-Bayesians: Correcting Proper Scoring Rules for Risk Attitudes’, Review of Economic Studies, vol. 76, pp. 1461–1489.

Schlag, K, J Tremewan, and J van der Weele (2015), ‘A Penny for Your Thoughts: A Survey of Methods for Eliciting Beliefs’, Experimental Economics, vol. 18, pp. 457–490.

Trautmann, S T, and G van de Kuilen (2015), ‘Belief Elicitation: A Horse Race among Truth Serums’, Economic Journal, vol. 125, pp. 2116–2135.
