VoxEU Column – Frontiers of economic research, Financial Markets, Macroeconomic policy

Lumpy forecasts: Rational inaction in professional forecasting

Forecasts from professionals (economists, analysts, brokers, academics) are a key input into economic decision-making. This column highlights that professional forecasts are ‘lumpy’, often remaining unchanged for several periods before shifting in large, infrequent jumps. It argues that this reflects ‘rational inaction’, as frequent adjustments or constant swings could undermine credibility. It suggests a natural distinction between the forecasts reported in surveys and the beliefs forecasters hold internally. Finally, it proposes a simple two-stage procedure for recovering more accurate measures of underlying beliefs.

Forecasts from professionals – economists, analysts, brokers, and academics – are a key input into economic decision-making. Businesses use them to plan investment and pricing strategies, while governments and central banks rely on them to design fiscal and monetary policies.

Since the influential work of Coibion and Gorodnichenko (2012), survey data from professional forecasters have become central to testing theories of expectation formation, informational frictions, and biases. Existing research highlights various frictions and behavioural biases, including inattentiveness (Andrade and Le Bihan 2013), diagnostic expectations (Bordalo et al. 2022), and overconfidence in private information (Broer and Kohlhas 2024, Adam et al. 2024). Strategic incentives – how forecasters’ reputations influence their reported forecasts – have also garnered considerable attention (Ottaviani and Sørensen 2006, Gemmi and Valchev 2023).

Our recent paper (Baley and Turen 2025) highlights a striking yet underexplored aspect of professional forecasts. These forecasts often remain unchanged for several periods, only to shift abruptly in large, infrequent jumps. We argue that these ‘lumpy forecasts’ do not indicate irrationality or bias; rather, they reflect rational inaction. Forecasters value stability in their published predictions and aim to avoid appearing erratic to their clients or the public. Forecasts might seem overly inertial not because they fail to process information – ultimately, they are professionals – but because frequent adjustments or constant swings could undermine credibility. Our work suggests a natural distinction between forecasts (what is reported in surveys) and beliefs (what forecasters hold internally).

This preference for stability leads forecasters to hold off on revising their predictions until enough evidence accumulates to justify an update. In practice, this results in large, occasional revisions (rational ‘catch-ups’) that may appear to be an overreaction to information. Strategic concerns amplify this tendency further: forecasters seek stability and alignment with the consensus (the average forecast), avoiding the reputational risk of being an outlier. These two forces – stability and alignment with others – generate and magnify the observed lumpiness and overreaction.

To better understand these frictions, we explore the heterogeneity among forecasting institutions by comparing banks, financial institutions, consulting firms, and universities. Each group faces distinct costs and reputational incentives, allowing us to identify each friction's role more clearly. Finally, we propose a straightforward two-stage procedure to ‘cleanse’ survey data – isolating informative revisions and correcting for strategic alignment – to provide a more accurate proxy of forecasters’ actual beliefs.

Documenting lumpy forecasts

Using detailed US inflation forecast data from Bloomberg's Economic Forecast (ECFC) panel and a fixed-event forecasting framework, we monitor the progression of forecasts about a fixed target (end-of-year inflation) across forecasting periods. Our study reveals several patterns:

  • Infrequent updating. Although forecasters receive new information monthly, they only update their end-of-year inflation forecasts four to six times a year. Figure 1 illustrates the probability of a non-zero revision across monthly forecast horizons. On average, 60% of forecasts remain unchanged in any given month. This inertia increases as the forecast horizon shortens: early in the year, roughly half of the forecasts are revised, but by the final months, that share drops to around 30%.

Figure 1 Average probability of a non-zero forecast revision

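The revision probability behind Figure 1 can be computed directly from a forecaster-by-horizon panel. A minimal sketch with simulated lumpy data (the 40% revision rate and all parameter values are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(2)

n_fcasters, n_months = 200, 12
p_revise_true = 0.4                 # illustrative: revise ~40% of months

# Hypothetical fixed-event panel: forecasts carry over unless revised
panel = np.empty((n_fcasters, n_months))
panel[:, 0] = rng.normal(2.5, 0.5, n_fcasters)
for t in range(1, n_months):
    revise = rng.random(n_fcasters) < p_revise_true
    step = rng.normal(0.0, 0.2, n_fcasters)
    panel[:, t] = np.where(revise, panel[:, t - 1] + step, panel[:, t - 1])

# Measured probability of a non-zero revision at each horizon (cf. Figure 1)
prob_nonzero = (np.diff(panel, axis=1) != 0).mean(axis=0)
print(prob_nonzero.round(2))
```

With real survey data, the same one-liner on the panel of reported forecasts recovers the horizon profile of revision probabilities.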

  • Consensus-driven revisions. Forecast revisions are influenced by their distance from the consensus (average) forecast. Figure 2 illustrates the forecaster’s distance from the consensus (the ‘consensus gap’) on the horizontal axis, while the vertical axis displays the relative probability of revisions. Upward revisions (red line) are more likely when the forecaster is below the consensus and are less likely when above it; downward revisions (magenta line) are more probable when above the consensus and are suppressed when below it.

Figure 2 Gap to the consensus triggers revisions


  • Overshooting revisions. Forecast revisions are often larger than can be justified by recent information alone. Figure 4 illustrates the ordinary least squares (OLS) coefficient from regressing forecast errors on forecast revisions, using all revisions (red line). The significant negative coefficients (averaging −0.76 across horizons) indicate overreaction. Upward revisions lead to overpredictions, while downward revisions result in underpredictions.
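The overreaction test above is a regression of forecast errors on forecast revisions: a negative slope means revisions overshoot. A sketch on simulated data, with the slope set to the −0.76 average reported above (the data-generating process is our own illustration, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5000
# Under overreaction, revisions overshoot the news, so subsequent
# forecast errors (actual minus forecast) move against the revision.
revision = rng.normal(0.0, 0.5, n)
noise = rng.normal(0.0, 0.3, n)
beta_true = -0.76                      # average coefficient reported above
error = beta_true * revision + noise

# OLS of forecast errors on forecast revisions (with an intercept)
X = np.column_stack([np.ones(n), revision])
beta_hat = np.linalg.lstsq(X, error, rcond=None)[0][1]

print(round(beta_hat, 2))
```

A coefficient of zero would indicate full-information rational expectations; a positive one, underreaction.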

Rational inaction: A structural model of lumpy forecasts

To explain these empirical patterns, we develop a structural Ss model analogous to those used in price setting (with menu costs) or investment (with capital adjustment costs) but specifically tailored to forecast revisions. In our framework, forecasters continuously update their internal beliefs through Bayesian learning from public and private signals. However, two key frictions shape how and when these beliefs are reflected in reported forecasts:

  • Fixed revision costs discourage frequent updates, as forecasters prefer maintaining stability in their reported projections.
  • Strategic concerns reflect a desire to remain close to the consensus.

The model, calibrated to align with the frequency and size of revisions in the Bloomberg data, replicates the observed irregularities, consensus-seeking, and apparent overreaction in professional forecasts. Figure 3 illustrates a simulated year with 100 forecasters (in light grey), their average (blue), and their average internal belief (green). While reported forecasts adjust infrequently, internal beliefs respond more smoothly and closely track the actual inflation target (dashed line). This highlights the mismatch between what forecasters believe and what they report – a central insight of the model.
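The inaction-band mechanism can be sketched in a few lines: beliefs update every month, but the reported forecast moves only when the gap to the desired forecast (a mix of own belief and the consensus) exceeds a band implied by the fixed revision cost. All parameter values below are illustrative, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

T, N = 12, 100        # forecasting months and forecasters
target = 2.5          # fixed end-of-year inflation target (illustrative)
band = 0.20           # inaction band implied by the fixed revision cost (illustrative)
w_cons = 0.3          # weight on the consensus, capturing strategic concerns (illustrative)
kappa = 0.4           # Bayesian gain on the private signal (illustrative)

beliefs = rng.normal(target, 0.6, N)   # internal beliefs
forecasts = beliefs.copy()             # reported forecasts
unchanged = 0

for t in range(T):
    signals = target + rng.normal(0.0, 0.4, N)
    beliefs += kappa * (signals - beliefs)      # beliefs move every month
    consensus = forecasts.mean()
    desired = (1 - w_cons) * beliefs + w_cons * consensus
    gap = desired - forecasts
    revise = np.abs(gap) > band                 # revise only outside the band
    forecasts[revise] += gap[revise]            # lumpy 'catch-up' revision
    unchanged += (~revise).sum()

share_unchanged = unchanged / (T * N)
print(round(share_unchanged, 2))
```

Even this toy version reproduces the qualitative pattern: smooth internal beliefs, infrequent reported revisions, and large catch-ups when a revision does occur.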

To address the challenges of solving the equilibrium with heterogeneity, strategic concerns, and aggregate shocks, we employ a Restricted Perceptions Equilibrium (RPE). This approach draws on the work of Marcet and Nicolini (2003) and Adam and Marcet (2011) and was advocated more recently by Moll (2024). RPE enables us to capture rich dynamics in a way that would be computationally infeasible under full rational expectations.

Figure 3 Simulated beliefs and forecasts in the model


Forecast frictions: Insights from cross-sectional heterogeneity

We examine differences among types of institutions to better understand the sources of lumpiness, strategic concerns, and heterogeneity in forecasting behaviour. We categorise Bloomberg’s forecasters into four groups – financial institutions, banks, consulting companies, and universities – and estimate frictions for each type. Table 1 presents these estimates, normalised relative to financial institutions, which serve as the benchmark group.

Table 1 Estimated frictions by forecaster type (relative to financial institutions)


Financial institutions and consultancies exhibit the lowest revision costs and the strongest strategic concerns, likely reflecting a desire to stay relevant or to avoid alienating clients by deviating too far from their peers. In contrast, universities show the highest revision costs, the weakest strategic concerns, and the largest signal noise, consistent with institutional preferences for stability, independence, and more diverse internal views.

These frictions likely extend to households (D’Acunto et al. 2024) and firms (Thwaites et al. 2022). Strategic concerns are likely less significant for these groups, while revision costs may stem from inattention, cognitive constraints, or personal experience (Malmendier et al. 2022). Recent evidence also highlights substantial heterogeneity in how agents process information (Meeks and Monti 2024). Understanding how frictions differ across economic agents remains an open question.

Policy implications: Refining measurement of expectations

Our findings carry important implications for policymakers who rely on survey-based forecasts. As discussed, reported forecasts are shaped not only by information but also by strategic and frictional distortions. We propose a simple two-stage procedure for recovering more accurate measures of underlying beliefs:

  1. Isolate active revisions: focus solely on periods with explicit forecast updates, discarding unchanged predictions.
  2. Correct for strategic bias: use a linear regression to strip out the forecasters’ tendency to align with the consensus.

This approach significantly reduces the perception of overreaction in the data and improves the interpretation of expectations to inform policy. Figure 4 presents the estimated OLS coefficients from regressing forecast errors on forecast revisions, using all forecasts (red line), only updaters (black dashed line), and correcting for lumpy behaviour and strategic bias (pink line), progressively bringing the coefficient closer to zero. This shows that the observed overreaction is not entirely behavioural but amplified by rational inaction.
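One way the two stages might be implemented for a single forecaster's fixed-event series is sketched below; the helper function, the example data, and the exact regression specification are our own illustration and may differ from the paper's:

```python
import numpy as np

def cleanse_revisions(forecasts, consensus):
    """Two-stage cleansing sketch for one forecaster's monthly series."""
    revisions = np.diff(forecasts)
    cons_gap = (consensus - forecasts)[:-1]   # gap to consensus before revising

    # Stage 1: keep only active (non-zero) revisions
    active = revisions != 0
    r, g = revisions[active], cons_gap[active]

    # Stage 2: regress revisions on the consensus gap and keep the residual,
    # i.e. the part of each revision not explained by strategic alignment
    X = np.column_stack([np.ones_like(g), g])
    coef = np.linalg.lstsq(X, r, rcond=None)[0]
    return r - X @ coef

# Hypothetical monthly forecasts and the matching consensus series
forecasts = np.array([2.0, 2.0, 2.4, 2.4, 2.4, 2.6, 2.5])
consensus = np.array([2.1, 2.3, 2.4, 2.5, 2.5, 2.5, 2.5])
cleansed = cleanse_revisions(forecasts, consensus)
print(cleansed.round(3))
```

The cleansed residuals are the candidate proxy for informative belief updates; re-running the error-on-revision regression on them is what moves the coefficient toward zero.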

Figure 4 Estimated coefficient of forecast errors on forecast revisions


Conclusion

What may appear to be overreaction or inertia in forecast data often reflects rational responses to frictions such as adjustment costs and reputational concerns. Recognising this ‘lumpiness’ helps in interpreting survey forecasts. We argue that separating reported forecasts from underlying beliefs, together with enhanced survey design and incentives (Gaglianone et al. 2022), can sharpen our understanding of expectations – and strengthen the foundations of macroeconomic policy.

References

Adam, K and A Marcet (2011), “Internal rationality, imperfect market knowledge, and asset prices”, Journal of Economic Theory 146(3): 1224–1252.

Adam, K, P Kuang and S Xie (2024), “Overconfidence in Private Information Explains Biases in Professional Forecasts”, Mannheim University mimeo.

Andrade, P and H Le Bihan (2013), “Inattentive professional forecasters”, Journal of Monetary Economics 60(8): 967–982.

Baley, I and J Turen (2025), “Lumpy Forecasts”, CEPR Discussion Paper DP19824.

Bordalo, P, N Gennaioli, Y Ma and A Shleifer (2020), “Overreaction in Macroeconomic Expectations”, American Economic Review 110(9): 2748–82.

Bordalo, P, N Gennaioli and A Shleifer (2022), “Overreaction and Diagnostic Expectations in Macroeconomics”, Journal of Economic Perspectives 36(3): 223–244.

Broer, T and A Kohlhas (2024), “Forecaster (mis-)behavior”, Review of Economics and Statistics 106(5): 1334–1351.

Coibion, O and Y Gorodnichenko (2012), “What Can Survey Forecasts Tell Us about Information Rigidities?”, Journal of Political Economy 120(1): 116–159.

D’Acunto, F, E Charalambakis, D Georgarakos, G Kenny, J Meyer and M Weber (2024), “Household inflation expectations: Taking stock of the recent research insights for monetary policy”, VoxEU.org, 1 August.

Gaglianone, W P, R Giacomini, J V Issler and V Skreta (2022), “Incentive-driven inattention”, Journal of Econometrics 231(1): 188–212.

Gemmi, L and R Valchev (2023), “Biased Surveys”, NBER Working Paper No. 31607.

Malmendier, U M, M Weber and F D’Acunto (2022), “Learning about inflation expectations from the data”, VoxEU.org, 7 May.

Marcet, A and J P Nicolini (2003), “Recurrent Hyperinflations and Learning”, American Economic Review 93(5): 1476–1498.

Meeks, R and F Monti (2024), “Inflation expectations: Making all the information count”, VoxEU.org, 2 March.

Moll, B (2024), “The Trouble with Rational Expectations in Heterogeneous Agent Models: A Challenge for Macroeconomics”, CEPR Discussion Paper 19731.

Ottaviani, M and P N Sørensen (2006), “The strategy of professional forecasting”, Journal of Financial Economics 81(2): 441–466.

Thwaites, G, I Yotzov, O Ozturk, P Mizen, P Bunn, N Bloom and L Anayi (2022), “Firm inflation expectations in quantitative and text data”, VoxEU.org, 8 December.