DP11914 Myopia and Discounting
We assume that perfectly patient agents estimate the value of future events by generating noisy, unbiased simulations and combining those signals with priors to form posteriors. These posterior expectations exhibit as-if discounting: agents make choices as if they were maximizing a stream of known utils weighted by a discount function, D(t). This as-if discount function reflects the fact that estimated utils are a combination of signals and priors, so average expectations are optimally shaded toward the mean of the prior distribution, generating behavior that partially mimics the properties of classical time preferences. When the simulation noise has variance that is linear in the event's horizon, the as-if discount function is hyperbolic, D(t) = 1/(1+at). Our agents exhibit systematic preference reversals, but have no taste for commitment because they suffer from imperfect foresight, which is not a self-control problem. In our framework, agents who are more skilled at forecasting (e.g., those with more intelligence) exhibit less discounting. Agents with more domain-relevant experience exhibit less discounting. Older agents exhibit less discounting (except those with cognitive decline). Agents who are encouraged to spend more time thinking about an intertemporal tradeoff exhibit less discounting. Agents who are unable to think carefully about an intertemporal tradeoff -- e.g., due to cognitive load -- exhibit more discounting. In our framework, patience is highly unstable, fluctuating with the accuracy of forecasting.
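The shrinkage mechanism can be sketched numerically. With a zero-mean normal prior of variance sigma_u^2 and unbiased simulation noise whose variance grows linearly with the horizon t (variance sigma_eps^2 * t), the Bayesian posterior mean scales the signal by sigma_u^2 / (sigma_u^2 + sigma_eps^2 * t) = 1/(1+at), where a = sigma_eps^2 / sigma_u^2. The sketch below is illustrative; the parameter values and variable names are assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions for this sketch):
sigma_u2 = 1.0    # prior variance of the event's true utility u
sigma_eps2 = 0.5  # per-period simulation-noise variance; Var(noise) = sigma_eps2 * t
a = sigma_eps2 / sigma_u2  # implied hyperbolic discount rate

true_u = 1.0      # true utility of the future event
n_sims = 200_000  # number of simulated agents per horizon

for t in [1, 5, 10, 20]:
    # Noisy, unbiased simulation signal: s = u + eps, eps ~ N(0, sigma_eps2 * t)
    s = true_u + rng.normal(0.0, np.sqrt(sigma_eps2 * t), size=n_sims)
    # Posterior mean shrinks the signal toward the prior mean (zero):
    # shrinkage factor = sigma_u2 / (sigma_u2 + sigma_eps2 * t) = 1 / (1 + a t)
    shrink = sigma_u2 / (sigma_u2 + sigma_eps2 * t)
    posterior = shrink * s
    print(f"t={t:2d}  mean posterior value={posterior.mean():.3f}  "
          f"D(t)=1/(1+at)={1/(1 + a*t):.3f}")
```

On average, the posterior valuation of a util delivered at horizon t equals true_u/(1+at), so a perfectly patient Bayesian agent behaves as if discounting hyperbolically; longer horizons mean noisier simulations, heavier shrinkage toward the prior, and thus more apparent discounting.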