
Errors as a source of macroeconomic frictions

Many models rely on the assumption of nominal price stickiness, but different formulations of the friction can greatly alter its macroeconomic implications. In this column, price stickiness is modelled as the result of errors due to costly decision-making. Errors in the prices firms set help explain micro ‘puzzles’ relating to the sizes of price changes, the behaviour of adjustment hazards, and the variability of prices and costs. Errors in adjustment timing increase the real effects of monetary shocks by reducing the ‘selection effect’.

Assumptions about price stickiness

Many models used for macroeconomic policy analysis today rely on an assumption of nominal price stickiness. But different formulations of nominal frictions can greatly alter their macroeconomic implications. For example, Calvo’s (1983) assumption of a constant price adjustment probability implies large real effects of monetary policy, while ‘menu cost’ models such as that of Golosov and Lucas (2007), which feature a ‘state-dependent’ adjustment hazard, imply that money is almost neutral.

Akerlof and Yellen (1985) argued that the main cause of nominal stickiness could simply be human error, since perfectly error-free decisions would be costly. In recent work (Costain and Nakov 2015a,b), we evaluate their argument quantitatively, in the light of retail pricing microdata. To model costly, error-prone choice we adopt the ‘control cost’ approach from game theory (e.g. Van Damme 1991), which treats actions as random variables and assumes that greater precision can be achieved by paying a higher decision cost. This captures the idea that a manager may devote more time and effort to a decision, considering more payoff-relevant factors more thoroughly and carefully, to increase the probability of selecting the best possible action rather than some inferior alternative. Many possible cost functions could be assumed, but like Mattsson and Weibull (2002) and Matějka and McKay (2015), we choose a specification which implies that decision probabilities take the familiar form of multinomial logits.
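
To see the mechanics, consider a sketch in the spirit of Mattsson and Weibull (2002); the notation here is ours. A decision maker picks a probability distribution $\pi$ over actions, trading off expected payoff against a control cost proportional to the relative entropy of $\pi$ with respect to a default distribution $\eta$:

\[
\max_{\pi} \;\; \sum_a \pi(a)\, v(a) \;-\; \kappa \sum_a \pi(a) \ln\frac{\pi(a)}{\eta(a)},
\]

where $v(a)$ is the payoff of action $a$ and $\kappa > 0$ prices precision. The solution is the multinomial logit

\[
\pi(a) \;=\; \frac{\eta(a)\, e^{v(a)/\kappa}}{\sum_{a'} \eta(a')\, e^{v(a')/\kappa}}.
\]

As $\kappa \to 0$ the decision maker selects the best action with probability one; as $\kappa \to \infty$ behaviour collapses to the default $\eta$; in between, better actions are more likely but errors occur.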

New approaches to model price stickiness

In one of our recent papers (“Precautionary price stickiness”), we ask how an otherwise-standard state-dependent pricing model behaves if it is based on a control cost function instead of menu costs (Costain and Nakov 2015a). The firm then faces a tradeoff whenever it resets its price. It could allocate sufficient time and effort to choose the frictionlessly optimal price with probability one, or it could choose a new price immediately without any thought, but then its reset price would be noisy, displaying large random errors. Optimal choice lies between these extremes, trading off marginal losses from errors against the marginal cost of achieving greater accuracy. Therefore, when the current price is sufficiently close to the target, the firm is better off not adjusting, to avoid decision costs and the risk of errors. Hence, our model generates an inaction band around the optimal price, just as a traditional fixed menu cost (FMC) model does. However, the presence of errors greatly improves the fit to microdata, in ways we will discuss shortly.
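
To make the tradeoff concrete, here is a stylised numerical sketch. It is our own toy, not the paper's calibrated model: the parameters are made up, the loss function is a quadratic approximation, and the precision of the price choice is held fixed rather than optimised. It compares the loss from keeping the current price gap with the expected loss from a noisy, costly reset, and recovers an inaction band:

```python
import numpy as np

# Toy illustration of a "precautionary" inaction band (illustrative numbers).
kappa = 0.02                          # noise/cost parameter of the price choice
grid = np.linspace(-0.5, 0.5, 201)    # candidate price gaps after adjustment
loss = grid**2                        # period loss from operating at gap x

# Logit reset distribution: low-loss prices get more weight, but not all of it.
pi = np.exp(-loss / kappa)
pi /= pi.sum()

# Expected loss if the firm adjusts: residual error loss plus an entropy-based
# decision cost (relative entropy of the logit w.r.t. a uniform default).
error_loss = pi @ loss
decision_cost = kappa * np.sum(pi * np.log(pi * len(grid)))
adjust_loss = error_loss + decision_cost

# The firm adjusts only where keeping the current gap x is even worse.
x = np.linspace(-0.5, 0.5, 201)
band = x[x**2 <= adjust_loss]
print(f"inaction band: [{band.min():+.3f}, {band.max():+.3f}]")
```

Inside the band, the certain loss from the current price is smaller than the decision cost plus the risk of errors, so the firm leaves its price alone, just as under a fixed menu cost.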

On the other hand, the presence of sharp inaction bands implies that the firm is perfectly precise in its timing choices: its probability of price adjustment jumps from exactly zero to one precisely as it crosses out of the inaction region, which seems implausible. Therefore, in a second paper (“Logit price dynamics”), we impose control costs both on the choice of when to adjust the price and on the choice of what new price to set conditional on a change (Costain and Nakov 2015b). Under our assumed cost function, the adjustment hazard that best trades off the marginal risk of timing errors against the marginal decision cost of more accurate timing takes the form of a weighted binary logit.[1]
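
Concretely, a weighted binary logit hazard takes a form along the following lines (the notation is ours and the sketch is illustrative, not the paper's exact specification):

\[
\Lambda(L) \;=\; \frac{\bar\lambda\, e^{L/\kappa}}{\bar\lambda\, e^{L/\kappa} + 1 - \bar\lambda},
\]

where $L$ is the expected gain from adjusting, $\bar\lambda \in (0,1)$ is a default adjustment rate, and $\kappa$ governs timing noise. The two workhorse models are nested as limits: as $\kappa \to \infty$ the hazard is the constant $\bar\lambda$, as in Calvo, while as $\kappa \to 0$ it becomes a step function that jumps from zero to one where $L$ turns positive, as in a fixed menu cost model.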

Both types of errors help the logit price dynamics model match the data, in distinct ways. Errors in what prices firms set help reproduce a variety of observations from microdata, while errors in when firms adjust their prices help fit macrodata by increasing the non-neutrality of monetary policy. One microeconomic finding we address is the coexistence of very small and very large price adjustments (see Figure 1).[2] In the fixed menu cost model, price changes occur just when the firm crosses the inaction bands, so the histogram of price adjustments consists of two sharp spikes. In contrast, with control costs, the price change distribution is smoothed out by errors. Small non-zero price changes occur in the logit price dynamics model in two ways: a firm may erroneously decide to adjust its price when this is unnecessary, but then correctly choose a small price change; or it may correctly conclude that it needs to adjust its price, but then err in its calculation, setting a new price close to the old one.[3]
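
A toy simulation can illustrate how the two error margins smooth the price-change distribution. The Python sketch below is our own illustration with made-up parameters (kappa_p, kappa_t, lam_bar and the shock volatility are not the paper's calibration): price gaps drift with idiosyncratic shocks, adjustment timing follows a weighted logit hazard of the form above, and reset prices are drawn from a logit distribution rather than set exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's calibration)
kappa_p = 0.02     # noise in the reset-price choice
kappa_t = 0.005    # noise in the timing choice
lam_bar = 0.1      # default adjustment hazard
T, N = 500, 10_000

# Logit reset distribution over candidate price gaps: quadratic loss, so
# low-loss (small-gap) prices get more weight, but errors remain.
grid = np.linspace(-0.5, 0.5, 201)
reset_pi = np.exp(-grid**2 / kappa_p)
reset_pi /= reset_pi.sum()

gap = np.zeros(N)   # log price minus frictionless optimum
changes = []

for t in range(T):
    gap += rng.normal(0.0, 0.03, N)   # idiosyncratic cost shocks
    # Weighted binary logit hazard in the gain from adjusting (~ gap^2)
    lam = 1.0 / (1.0 + (1 - lam_bar) / lam_bar * np.exp(-gap**2 / kappa_t))
    adj = rng.random(N) < lam
    new_gap = rng.choice(grid, size=adj.sum(), p=reset_pi)   # noisy reset
    changes.append(new_gap - gap[adj])
    gap[adj] = new_gap

dp = np.concatenate(changes)
print(f"share of changes below 5%:  {np.mean(np.abs(dp) < 0.05):.2f}")
print(f"share of changes above 15%: {np.mean(np.abs(dp) > 0.15):.2f}")
```

Both small and large adjustments occur with non-trivial frequency, and the simulated histogram is smooth rather than concentrated in two sharp spikes.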

Figure 1. Distribution of nonzero log price changes: Dominick’s data (from Midrigan, 2011) and models

Our model is also consistent with the observation that the adjustment hazard (after controlling for heterogeneity) remains roughly constant as time passes since the last adjustment, reminiscent of a Calvo model (see Figure 2). In contrast, the fixed menu cost model implies an increasing hazard, since the firm’s preferred price tends to deviate further from its nominal price the longer the latter stays fixed. Errors in timing inject randomness into adjustment decisions, flattening out systematic patterns in the hazard. Another reason our hazard is nearly flat is that when the firm mistakenly sets an inappropriate price, it has an incentive to reset it again soon thereafter, making the slope of the adjustment hazard ambiguous in sign.
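
The flattening role of timing errors can be seen in a variant of the toy simulation above (again with illustrative parameters of our own), which tracks the empirical adjustment hazard as a function of the time elapsed since the last price change:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa_t, lam_bar = 0.005, 0.1    # timing noise and default hazard (toy values)
T, N, D = 600, 20_000, 40        # periods, firms, max duration tracked

gap = np.zeros(N)                # price gap; reset with noise on adjustment
age = np.zeros(N, dtype=int)     # periods since the last price change
spells = np.zeros(D)
adjusters = np.zeros(D)

for t in range(T):
    gap += rng.normal(0.0, 0.03, N)
    # Weighted binary logit hazard in the gain from adjusting (~ gap^2)
    lam = 1.0 / (1.0 + (1 - lam_bar) / lam_bar * np.exp(-gap**2 / kappa_t))
    adj = rng.random(N) < lam
    if t > 100:                  # discard burn-in
        d = np.minimum(age, D - 1)
        spells += np.bincount(d, minlength=D)
        adjusters += np.bincount(d[adj], minlength=D)
    gap[adj] = rng.normal(0.0, 0.1, adj.sum())   # noisy reset price
    age[adj] = 0
    age[~adj] += 1

hazard = adjusters / np.maximum(spells, 1)
print(np.round(hazard[:12], 3))  # no strong upward slope in this toy run
```

In this toy parameterisation the hazard shows no strong upward slope, whereas a fixed menu cost model would generate a clearly increasing one.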

Figure 2. Price adjustment hazard: Data from Nakamura and Steinsson (2008) and the 'logit price dynamics' model

Our two papers have similar implications for microdata, but differ substantially in their macro implications. The sharp inaction bands in 'precautionary price stickiness' imply a strong 'selection effect' like that found by Golosov and Lucas. After a surprise monetary expansion, the firms with the costliest price deviations reset immediately, generating a large inflation spike that eliminates most of the effect on real variables. In contrast, when timing errors are present, some firms requiring a large price change fail to adjust immediately, while some adjusters only make small changes; inflation kicks in more gradually, implying larger real effects. While our 'precautionary price stickiness' framework generates impulse responses almost indistinguishable from the Golosov-Lucas model under comparable calibrations, the logit price dynamics model implies a consumption response two-and-a-half times as large (but only one-third as large as that of the Calvo model), as Figure 3 shows.[4] Furthermore, our control cost model can also address the impact of trend inflation on price change statistics – including the frequency of adjustment, the average size of adjustments, and the fraction of positive adjustments (see Figure 4).
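
The mechanism can be stated compactly with a decomposition familiar from the Ss-pricing literature (the notation here is ours). Let $x$ be a firm's desired price adjustment, $f(x)$ the cross-sectional density of such gaps, and $\Lambda(x)$ the adjustment hazard. The impact effect of a small monetary shock on inflation is approximately proportional to

\[
\int \Lambda(x)\, f(x)\, dx \;+\; \int x\, \Lambda'(x)\, f(x)\, dx.
\]

The first term reflects the frequency of adjustment and is the only one active under Calvo, where $\Lambda' \equiv 0$. The second is the selection term: it is largest when the hazard rises steeply in $x$, as under fixed menu costs, because then the firms that adjust are precisely those with the largest desired changes. The gentler slope of the logit hazard shrinks this term, placing the model's monetary non-neutrality between the two benchmarks.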

Figure 3. Responses to a money growth shock: FMC, Calvo and 'logit price dynamics' models

Figure 4. Adjustment frequency, average absolute price change and percentage of price increases as a function of annual inflation: Gagnon (2009) Mexican data (stars) and 'logit price dynamics' model (solid line)

Readers will notice that our setup is closely related to ‘rational inattention’ in the sense of Sims (2003). One difference is that our approach focuses specifically on intermittent adjustment. We model each adjustment as a costly decision which can be avoided by leaving the control variable unchanged. A second difference is that the two frameworks focus on different ‘stages’ of the choice process: obtaining information, versus using that information to choose an action. Formally, Sims’ framework can be viewed as costly information extraction followed by a costless choice of an action, while ours can be viewed as a costly choice of an action when full information is freely available. Our model describes a situation where the firm has sufficient information to calculate its optimal price, but the required calculations are costly. It may increase precision by considering more variables, computing higher order terms, running checks, and so forth, thus requiring more decision time. Both approaches generate sluggish adjustment, but our setup is much easier to compute, as there is no need to keep track of firms’ priors when simulating the model.

Our analysis is one example of an extensive new literature that models retail pricing patterns in greater detail than Golosov and Lucas did. Other possible explanations of small non-zero price changes include paying a single menu cost to reset several prices (Midrigan 2011), fixed costs of obtaining information (Álvarez et al. 2014), or stochastic menu costs (Dotsey et al. 2013). A quantitative consensus seems to be emerging. All these models imply a lower correlation between the value of price adjustment and the probability of adjustment than fixed menu cost models do. Therefore, they have a weaker selection effect, with a degree of non-neutrality lying between the fixed menu cost framework and the Calvo model.

Advantages of the new approach

What is appealing about our own approach?

  • First, case studies (e.g. Zbaracki et al. 2004) suggest that managerial decision-making accounts for a large share of the costs of price setting, quantitatively in line with our model.
  • Second, we allow for interior solutions. If the main cost of price adjustment is sticking a price tag on a product, a fixed cost specification seems reasonable. But if the main cost of price setting is obtaining or using information, then going to the corner solution of perfect precision is unlikely to be optimal.[5]
  • Third, errors cannot be assumed to cancel out when modelling rich microdata where individual choices are observed; they directly affect the first and higher moments of adjustments.
  • Fourth, by allowing for errors, we fit many micro and macro facts at least as well as other recent papers, while varying only two free parameters in our estimation. Recent experimental evidence also points to errors in adjustment timing (see Magnani et al. 2015).

Finally, our framework may have many applications beyond nominal price setting, wherever a decision-maker intermittently resets a control variable. We are currently working on modelling wages. Other applications might include bids and asks in financial markets or auctions, intermittent adjustment of physical capital, matching and separation processes, or intermittent updating of macroeconomic policy variables.

Authors’ note: Opinions expressed in this article are those of the authors. They should not be attributed to the Banco de España, the European Central Bank, the Eurosystem, or CEPR.

References

Akerlof, G and J Yellen (1985), “Can small deviations from rationality make a significant difference to economic equilibria?” American Economic Review 75 (4), 708-720.

Álvarez, F, F Lippi, and L Paciello (2014), “Monetary shocks in models with inattentive producers,” NBER Working Paper 20817.

Calvo, G (1983), “Staggered prices in a utility-maximizing framework,” Journal of Monetary Economics 12, 383-398.

Costain, J and A Nakov (2015a), “Precautionary price stickiness,” Journal of Economic Dynamics and Control 58, 218-234.

Costain, J and A Nakov (2015b), “Logit price dynamics,” CEPR Discussion Paper 10731.

Dotsey, M, R King, and A Wolman (2013), “Inflation and real activity with firm-level productivity shocks,” Federal Reserve Bank of Philadelphia Working Paper 13-35.

Gagnon, E (2009), “Price setting under low and high inflation: evidence from Mexico,” Quarterly Journal of Economics 124, 1221-1263.

Golosov, M and R Lucas (2007), “Menu costs and Phillips curves,” Journal of Political Economy 115 (2), 171-199.

Magnani, J, A Gorry, and R Oprea (2015), “Time- and state-dependence in an Ss decision experiment,” American Economic Journal: Macroeconomics, forthcoming.

Matějka, F and A McKay (2015), “Rational inattention to discrete choices: a new foundation for the multinomial logit model,” American Economic Review 105 (1), 272-298.

Mattsson, L-G and J Weibull (2002), “Probabilistic choice and procedurally bounded rationality,” Games and Economic Behavior 41, 61-78.

Midrigan, V (2011), “Menu costs, multiproduct firms, and aggregate fluctuations,” Econometrica 79 (4), 1139-1180.

Nakamura, E and J Steinsson (2008), “Five facts about prices: a reevaluation of menu cost models,” Quarterly Journal of Economics 123 (4), 1415-1464.

Sims, C (2003), “Implications of rational inattention,” Journal of Monetary Economics 50, 665-690.

Van Damme, E (1991), Stability and Perfection of Nash Equilibrium, 2nd edition. Springer-Verlag.

Woodford, M (2009), “Information-constrained state-dependent pricing,” Journal of Monetary Economics 56, S100-S124.

Zbaracki, M J, M Ritson, D Levy, S Dutta, and M Bergen (2004), “Managerial and customer costs of price adjustment,” Review of Economics and Statistics 86 (2), 514-533.

Footnotes

[1] Woodford (2009) also derives an adjustment hazard in the form of a weighted binary logit.

[2] The panels compare histograms of nonzero log price adjustments from comparable calibrations of the fixed menu cost model, the Calvo model, and our logit price dynamics model, together with an analogous histogram based on US retail price data.

[3] Note that when a firm in the logit price dynamics model makes a small price change, it does not know which of these two types of errors has occurred. In a menu cost model, if the desired adjustment is sufficiently small, then the firm just keeps its price fixed. But in the logit price setup, once the firm has sunk the cost of calculating its desired adjustment, it prefers to go ahead with the change, even if it is small.

[4] Here we take the area under the consumption impulse response as the measure of the real effects of the shock.

[5] In contrast, Álvarez et al. (2014) assume firms pay a fixed cost to obtain perfect information.
