ESSIM 2018 - Summaries

European Summer Symposium in International Macroeconomics (ESSIM) 2018

Selected Paper Summaries

ESSIM, which was hosted by Norges Bank in Oslo last week, is one of CEPR’s flagship annual symposia currently in its 26th year. The meeting brings together about 75 economists from across Europe and key researchers from outside the region. It provides a unique opportunity for macroeconomists from different research institutions and countries to discuss research in a relaxed atmosphere and to develop long-term collaborative relationships.

To extend the reach of the research presented at ESSIM to CEPR members, summaries of 13 key papers presented this year are provided below. The research presented at ESSIM is often early stage and work in progress. As such, these summaries provide an insight into the very latest debates taking place in the macroeconomic community and give members unique access to the current, high quality research that is being conducted by CEPR researchers.


Plenary Sessions


Monetary Economics and Fluctuations


International Macroeconomics and Finance


Macroeconomics and Growth


Plenary sessions

Lags, Costs, and Shocks: An Equilibrium Model of the Oil Industry

Gideon Bornstein, Northwestern University
Per Krusell, Institute for International Economic Studies, Stockholm University, CEPR
Sergio Rebelo, Northwestern University and CEPR

With the emergence of large-scale fracking, the characteristics of oil production have changed. The authors use micro data to estimate a stochastic industry-equilibrium model of the oil industry, as a first step towards studying how these changes can be integrated into a general-equilibrium model of the world economy.
 
Oil shocks are often blamed for the poor performance of many countries in the 1970s but have a relatively minor role in leading macroeconomic models, because oil represents a relatively small share of overall production costs. But Gabaix (2011), Acemoglu et al. (2012), and Baqaee and Farhi (2017) argue that shocks to sectors with a small factor share that are highly complementary to other inputs can have a large impact on aggregate output.
 
In their model, the authors find two sources of oil producer heterogeneity that are particularly important:
 

  • The different behaviour of firms that are part of the Organization of the Petroleum Exporting Countries (OPEC) and those that are not.
  • The difference between fracking and conventional oil production. It is less costly for fracking firms to adjust their level of production in the short run, so these firms are more responsive to changes in prices. Also, the lag between investment and production is much shorter in fracking operations than in conventional oil production.

The research

The paper builds a stochastic industry-equilibrium model of the oil industry using proprietary data compiled by Rystad Energy. The data include information on reserves, production, investment, and operational costs for roughly 14,000 oil fields operated by 3,200 companies. The analysis focuses on the period from 1970 to 2015 and uses micro data to shed new light on key aspects of the oil industry (Kellogg 2014, Arezki et al. 2016, and Anderson et al. 2018).

The model treats shocks to the demand for oil in the world economy, as well as supply disruptions, as exogenous. It derives quantity and price outcomes as a function of these shocks. The benchmark model features heterogeneity between OPEC and non-OPEC firms. The authors also introduce a conventional-oil-versus-fracking dimension.

The micro data are used in three ways:

  • To establish a set of facts about oil markets.
  • To produce micro estimates of the average lag between investment and production and of the elasticity of extraction costs with respect to production.
  • To estimate the remaining model parameters by generalized method of moments, targeting a set of second moments for oil-related variables.
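The moment-matching step in the third bullet can be sketched as follows. This is a minimal illustration of simulated method of moments, not the paper's model: the AR(1) 'price' process and the two target moments below are made-up stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative target second moments (NOT the paper's data): the standard
# deviation and first autocorrelation of an oil-related series.
data_moments = np.array([0.25, 0.60])

def simulated_moments(params, T=10_000, seed=0):
    """Toy stand-in for the industry model: simulate an AR(1) 'price' series
    whose volatility and persistence play the role of structural parameters."""
    sigma, rho = params
    shocks = np.random.default_rng(seed).standard_normal(T)
    p = np.zeros(T)
    for t in range(1, T):
        p[t] = rho * p[t - 1] + sigma * shocks[t]
    return np.array([p.std(), np.corrcoef(p[1:], p[:-1])[0, 1]])

def gmm_objective(params):
    g = simulated_moments(params) - data_moments  # moment conditions
    return g @ g                                  # identity weighting matrix

res = minimize(gmm_objective, x0=[0.1, 0.5], method="Nelder-Mead")
sigma_hat, rho_hat = res.x  # near (0.2, 0.6) for these illustrative targets
```

With two parameters and two moments the toy problem is just-identified; fixing the simulation seed across evaluations keeps the objective smooth, a standard trick in simulation-based estimation.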

Conclusion

Fracking has led to a large decline in the volatility of oil prices, because firms are more nimble in adjusting production levels from existing fields and in starting production in new fields, so they can respond more quickly to price increases. They also find that supply and demand shocks contribute equally to the volatility of oil prices, but that investment in the oil industry is driven mostly by demand shocks.

References

Acemoglu, D, V M Carvalho, A Ozdaglar, and A Tahbaz-Salehi (2012), "The network origins of aggregate fluctuations", Econometrica 80: 1977–2016.
Anderson, S T, R Kellogg, and S W Salant (2018), "Hotelling under pressure", Journal of Political Economy, forthcoming.
Arezki, R, V A Ramey, and L Sheng (2016), "News shocks in open economies: Evidence from giant oil discoveries", The Quarterly Journal of Economics 132(1): 103–155.
Baqaee, D R and E Farhi (2017), "The macroeconomic impact of microeconomic shocks: Beyond Hulten’s Theorem", Technical report, National Bureau of Economic Research.
Gabaix, X (2011), "The granular origins of aggregate fluctuations", Econometrica 79(3): 733–772.
Kellogg, R (2014), "The effect of uncertainty on investment: Evidence from Texas oil drilling", The American Economic Review 104(6): 1698–1734.


The Spillover Effects of Top Income Inequality

Jeffrey Clemens, University of California, San Diego
Joshua D Gottlieb, University of British Columbia
David Hémous, University of Zurich and CEPR
Morten Olsen, IESE, University of Zurich

The share of total earnings going to the top of the income distribution has increased. This has occurred both in the general population, and also within specific occupations. The overall growth of top income inequality is not simply due to the divergence between bankers and lawyers, or between programmers and physicians (Bakija et al. 2012).
 
Therefore, any plausible explanation for rising inequality would have to apply to occupations as diverse as financial managers, doctors, and CEOs. The authors test the hypothesis that exogenous increases in income inequality in a local market within one occupation “spill over” into others through the former’s consumption.
 
The authors argue that, if the hypothesis is true, they should observe spillovers to occupations that work in local markets and serve the public directly, where:

  • ability is heterogeneous, as perceived by those consuming the service on offer
  • production is not scalable—there is no mechanism that would allow the more talented to scale up output, following Rosen (1981)—and
  • consumption is non-divisible

This should apply to doctors, but also occupations with the same characteristics: dentists, and real estate agents. But it should not apply to occupations which do not have these characteristics, for example brewers of beer (divisible), college professors (not in local markets) or secretaries (do not service the public).

The research

The model in the paper predicts that:

  1. High-earning patients are treated by more expensive doctors.
  2. An increase in general inequality will lead to an increase in inequality for doctors if they service the general population directly and their services are non-divisible.
  3. This is true regardless of whether doctors can move across regions, and regardless of whether doctors’ ability is positively correlated with the income they would receive in alternative occupations.
  4. If patients can travel easily, doctors’ income in each region does not depend on local income inequality.

The authors measure healthcare spending in 2008 using data from the Medical Expenditure Panel Survey, a nationally representative survey of families' health insurance coverage and medical spending, covering 31,262 individuals in 12,316 families. Insurance claim data from Blue Cross/Blue Shield of Texas, the Colorado All-Payer Claims Data, and All-Payer Claims Data from New Hampshire identify which doctors were paid for services, and by whom. A 10% increase in family income is associated with 2.3% more medical spending. Census data are used to compute local inequality.
 
To establish a causal link for the spillovers, the authors use a Bartik (1991)-style instrument: a weighted average of nationwide inequality for the 20 occupations most represented in the top 10% nationwide. This controls for the positive correlation between general income inequality and income inequality in a specific occupation.
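A shift-share instrument of this kind can be sketched with made-up numbers; the markets, occupations, and shares below are purely illustrative, not the paper's data.

```python
import numpy as np

# Local baseline occupation shares (rows: local markets; columns: top
# occupations, e.g. financial managers, CEOs, lawyers). Numbers are made up.
base_shares = np.array([
    [0.50, 0.30, 0.20],
    [0.20, 0.50, 0.30],
    [0.10, 0.20, 0.70],
])

# Nationwide change in top-income inequality, by occupation (the 'shift').
national_shift = np.array([0.08, 0.03, 0.01])

# Bartik-style instrument: predicted local inequality change driven only by
# pre-period occupational composition interacted with national trends.
bartik = base_shares @ national_shift  # one value per local market
```

The instrument varies across markets only through their pre-period composition, which is the source of the identifying variation.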

Conclusion

They find that an increase in general income inequality causes an increase in inequality for these occupations, with a spillover elasticity ranging from 0.5 to 2.7. The parameter estimates suggest that the majority of the increase in income inequality for these occupations can be explained by increases in general income inequality. Financial managers and college professors experience no spillover effects.

References

Bakija, J, A Cole, and B Heim (2012), “Jobs and Income Growth of Top Earners and the Causes of Changing Income Inequality: Evidence from U.S. Tax Return Data", unpublished manuscript, Williams College.
Bartik, T J, (1991), Boon or Boondoggle? The Debate Over State and Local Economic Development Policies, Upjohn Institute for Employment Research.
Rosen, S (1981), “The Economics of Superstars”, American Economic Review 71(5): 845–858.


Monetary Economics and Fluctuations

Mismatch Cycles

Isaac Baley, Universitat Pompeu Fabra and Barcelona GSE
Ana Figueiredo, Universitat Pompeu Fabra and Barcelona GSE
Robert Ulbricht, Toulouse School of Economics

The authors ask how business cycles affect the allocation of workers to jobs. Do workers end up more mismatched when jobs are scarce, or is it the opposite?
 
There are two schools of thought for how downturns in the business cycle might affect matching:

  • A procyclical cleansing effect: This is based on Mortensen and Pissarides (1994), which predicts that mismatch is procyclical. In downturns, their model suggests that reservation match quality increases: low-quality matches are destroyed while only high-quality matches are formed.
  • A countercyclical sullying effect: On the other hand, the matching model in Barlevy (2002) allows for on-the-job search and suggests that mismatch is countercyclical. In recessions, workers in ongoing job relationships reallocate to better matches (climb the ladder) more slowly. Also, Moscarini (2001) suggests that unemployed jobseekers accept less desirable jobs due to higher competition among them, which increases mismatch.

If both effects are present, then it is necessary to disentangle them and to find which effect is dominant. But previous studies lack a direct measure of mismatch. Instead they use indirect measures such as job duration and wages.
 
The authors use a direct measure of mismatch (a 'mismatch index') developed by Guvenen et al. (2015), and find evidence from it that average mismatch in the US economy is procyclical. This suggests that, in recessions, workers' skills are more aligned with job requirements, whereas in expansions mismatch increases.

The research

The research employs a worker-level panel from the 1979 National Longitudinal Survey of Youth (NLSY79) covering 1979 to 2012. The mismatch index is defined as the difference between a worker's abilities in different skills and how intensively a job requires those skills; a larger difference indicates a lower-quality match. To create the index, the authors combine data on workers' employment spells, occupations and skills with occupation-level data from O*NET.
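The index can be sketched as a distance between a worker's skill vector and a job's requirement vector. This is a minimal illustration with made-up skill dimensions and scores, not the exact Guvenen et al. (2015) construction.

```python
import numpy as np

def mismatch_index(worker_skills, job_requirements):
    """Sketch of a Guvenen et al. (2015)-style index: the sum of absolute
    gaps between a worker's (standardised) abilities and a job's skill
    requirements. Larger values indicate a lower-quality match."""
    w = np.asarray(worker_skills, dtype=float)
    r = np.asarray(job_requirements, dtype=float)
    return np.abs(w - r).sum()

# Illustrative skill dimensions (maths, verbal, social), scored 0-1.
worker = [0.9, 0.4, 0.6]
engineering_job = [0.8, 0.3, 0.5]  # requirements close to the worker's skills
sales_job = [0.2, 0.6, 0.9]        # requirements far from the worker's skills

assert mismatch_index(worker, engineering_job) < mismatch_index(worker, sales_job)
```

The same gaps can be split into over-qualification (skill above requirement) and under-qualification (skill below requirement), which is how the summary distinguishes the cleansing and sullying channels below.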

This method has the advantage that it can tease out evidence for both theories. It allows the authors to separately test the impact of business cycle conditions on the mismatch of existing job relationships and their effect on the mismatch of newly formed relationships. This helps them disentangle three separate effects of a downturn:

  • Cleansing for job stayers: Mismatches decrease in recessions, driven by workers being less under-qualified.
  • Sullying for new hires from unemployment: Mismatch increases in this group, supported by an increase in both over- and under-qualification.
  • No effect for new hires from employment.

In aggregate, the cleansing effect dominates.
 
They also find an effect on job duration. Adding the level of mismatch as a control, they investigate the hazard rate of job separation during a recession. Unsurprisingly, mismatch has a positive effect on the hazard rate of separation, and this effect increases with the contemporaneous level of unemployment. More surprisingly, matches that start in recessions have a shorter duration, even controlling for mismatch.
 
The authors build a model to reconcile these insights, based on Jovanovic (1979) and Moscarini (2005). It is a model of learning about worker-firm mismatch, augmented with fixed adjustment costs (it is costly to break and create relationships) and aggregate shocks. As workers and firms learn more about the quality of their match over time, less mismatch is tolerated. Following a negative productivity shock, mismatch becomes less tolerable; this destroys worker-firm matches with high levels of perceived mismatch. Newer relationships are more likely to separate, due to greater uncertainty.
 
If a worker only learns about her abilities in the skills required by her occupation, then when she switches occupations she starts learning about new skills. Countercyclical uncertainty therefore arises because a higher fraction of unemployed workers switch occupations in recessions.

Conclusion

In the model, a countercyclical information friction reconciles the fact that in recessions, jobs with high mismatch (but low uncertainty) are destroyed, while matches with high mismatch and high uncertainty are created. Newer (more uncertain) relationships are more likely to separate. In a recession, firms and workers have higher uncertainty about the mismatch level, which explains why a match is more likely to end when the unemployment rate at the start of the job is higher.

References

Barlevy, G (2002), "The sullying effect of recessions", Review of Economic Studies 69(1): 65–96.
Guvenen, F, B Kuruscu, S Tanaka, and D Wiczer (2015), "Multidimensional skill mismatch", Federal Reserve Bank of St. Louis working paper 2015-022A.
Jovanovic, B (1979), "Job matching and the theory of turnover", Journal of Political Economy 87(5): 972–990.
Mortensen, D and C Pissarides (1994), "Job creation and job destruction in the theory of unemployment", Review of Economic Studies 61(3): 397–415.
Moscarini, G (2001), "Excess worker reallocation", Review of Economic Studies 68: 593–612.
Moscarini, G (2005), "Job matching and the wage distribution", Econometrica 73(2): 481–516.


Deconstructing Monetary Policy Surprises – The Role of Information Shocks

Marek Jarocinski, European Central Bank
Peter Karadi, European Central Bank and CEPR

Economists have long tried to pin down how non-neutral monetary policy is (Christiano et al. 2005). Measuring the causal effect of policy is difficult because of the need to control for variation in the economic fundamentals to which policy endogenously responds. The authors argue that central bank announcements can help overcome this identification challenge, because they are an opportunity to isolate unexpected variation in policy.

There is evidence that unexpected information has an effect. One-third of FOMC announcements since 1990 have been accompanied by a co-movement of interest rate and stock market changes that is the opposite of what we would expect.
 
Central bank announcements simultaneously convey:

  • Information about monetary policy
  • The central bank's assessment of the economic outlook

The paper disentangles these two components and studies their effect on the economy using a structural vector autoregression estimated on both US and euro area data. It identifies a monetary policy shock through a negative co-movement between interest rate and stock price changes. If the two move in the same direction, the authors assume there is an accompanying central bank information shock.
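The sign logic can be made concrete with a toy classifier. In the paper this logic is imposed as a sign restriction within a Bayesian structural VAR rather than as a deterministic rule, so the sketch below (with made-up surprise values) only illustrates the identifying idea.

```python
def classify_announcement(rate_surprise, stock_surprise):
    """Sign logic of the identification scheme: within the announcement
    window, interest rates and stock prices moving in opposite directions
    flags a monetary policy shock; moving together flags a central bank
    information shock."""
    if rate_surprise * stock_surprise < 0:
        return "monetary policy shock"
    if rate_surprise * stock_surprise > 0:
        return "central bank information shock"
    return "indeterminate"

# Illustrative half-hour surprises (not actual FOMC data):
assert classify_announcement(+0.10, -0.50) == "monetary policy shock"
assert classify_announcement(+0.10, +0.40) == "central bank information shock"
```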

The research

The authors assess the dynamic impact of policy shocks and central bank information shocks using a Bayesian structural VAR and US data. The baseline VAR uses standard monthly variables – interest rates, the price level, economic activity and financial indicators – plus variables reflecting high-frequency financial-market surprises at monetary policy announcements. The methodology is closely related to proxy VARs (Stock and Watson 2012, Mertens and Ravn 2013) that use high-frequency interest rate surprises as external instruments to identify monetary policy shocks (Gertler and Karadi 2015).
 
The model uses 3-month fed funds futures to measure changes in expectations about short term interest rates and the S&P 500 index to measure changes in stock valuation within a half-hour window around FOMC announcements. In this window, the only shocks assumed to be relevant are the ones contained in the central bank announcement.
 
For the US data, the authors find that the direction of the stock market response within half an hour of the policy announcement is highly informative about the response of the economy in the months afterwards. The effects of an unanticipated interest rate increase accompanied by a stock price decline are very different from the effects of an unanticipated interest rate increase accompanied by a stock price increase.

  • An interest rate increase accompanied by a stock price decline (monetary policy shock): This leads to a significant contraction in output and a tightening of financial conditions.
  • An interest rate increase accompanied by a stock price increase (central bank information shock): This leads to a significantly higher price level and an improvement in financial conditions.

The same pattern is true of euro area data, using European Central Bank policy announcements.
 
The authors use a New Keynesian macroeconomic model (Gertler and Karadi 2011) with a central bank communication policy to conclude that financial frictions play a prominent role in the transmission of monetary policy shocks, and that the response to information shocks is broadly consistent with a financial asset-valuation shock.

Conclusion

As central bank information shocks have a macroeconomic impact, this is evidence that central bank communication is economically relevant. Disregarding these shocks can lead to biased measurements of monetary non-neutrality.

References

Christiano, L J, M Eichenbaum, and C L Evans (2005), “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy”, The Journal of Political Economy 113: 1–45.
Gertler, M and P Karadi (2011), “A Model of Unconventional Monetary Policy”, Journal of Monetary Economics 58: 17–34.
Gertler, M and P Karadi (2015), “Monetary Policy Surprises, Credit Costs, and Economic Activity”, American Economic Journal: Macroeconomics 7: 44–76.
Mertens, K and M O Ravn (2013), “The Dynamic Effects of Personal and Corporate Income Tax Changes in the United States”, The American Economic Review 103: 1212–1247.
Stock, J H and M W Watson (2012), “Disentangling the Channels of the 2007–09 Recession”, Brookings Papers on Economic Activity 2012: 81–135.


Words are the new numbers: A newsy coincident index of the business cycle

Leif Anders Thorsrud, Norges Bank

In real time the main measure of economic activity, GDP growth, cannot be observed. More timely indicators, like financial and labour market data, are monitored closely. This has three drawbacks:

  • The relationships between timely indicators and GDP growth are inherently unstable (Stock and Watson 2003). State-of-the-art models underperform when economic conditions change rapidly, especially notable in the Great Recession (Alessi et al. 2014).
  • There is not much high-frequency data reflecting the broader economy. The type of data from which coincident indexes are constructed is mostly financial. Understanding why an index changes might be as important as the movement itself, as reflected in the broad coverage of various data in monetary policy reports and national budgets, but that broad data is not easy to find in this context.
  • Agents use a plethora of high-frequency information to guide their actions. Media news might matter more than data from professional data providers, because it can reach a broad population of economic agents and alleviate informational frictions (Sims 2003, Peress 2014, Larsen and Thorsrud 2017).

The author's hypothesis is that if a newspaper provides a relevant description of the economy, the more news there is about a given topic, the more likely it is that this topic represents something of importance for the future of the economy.

The research

The research uses a long sample of all articles in Dagens Næringsliv, the largest and most read business newspaper in Norway, together with quarterly GDP growth in Norway. The author estimates a latent daily coincident index using a Bayesian time-varying Dynamic Factor Model, mixing daily data and quarterly GDP growth.
 
The unstructured newspaper data are classified into 80 tone-adjusted topic frequencies that vary in intensity across time, using natural language processing (Tetlock 2007).
 
The author adds a Latent Threshold Model (Nakajima and West 2013), which adds a threshold mechanism for the time-varying factor loadings. This explicitly takes into account that the relationship between the latent daily coincident index and the indicators used to derive it might be unstable: for example, the stock market is a good predictor of growth in some periods, but not others.
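The threshold mechanism can be sketched directly. This is a minimal illustration of the latent-threshold idea with made-up loading values, not the full Bayesian model.

```python
import numpy as np

def threshold_loading(latent_loading, d):
    """Latent Threshold mechanism (Nakajima and West 2013): a time-varying
    factor loading is set exactly to zero whenever its latent value is
    below the threshold d in absolute value, switching an indicator in
    and out of the coincident index."""
    b = np.asarray(latent_loading, dtype=float)
    return np.where(np.abs(b) >= d, b, 0.0)

# Illustrative latent loadings for one indicator (e.g. the stock market)
latent = np.array([0.05, 0.30, -0.02, -0.45, 0.10])
effective = threshold_loading(latent, d=0.15)
# effective is [0.0, 0.3, 0.0, -0.45, 0.0]: the indicator only loads on the
# index in periods when its latent loading is large enough in magnitude.
```

In the full model the threshold is estimated jointly with the time-varying loadings, so the data decide when each indicator is informative.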

Conclusion

The newspaper data classifies the phases of the business cycle with almost perfect accuracy, outperforming coincident indexes based on daily and monthly economic variables.
 
In out-of-sample forecasting experiments, the model produces nowcasts that are competitive with the performance of official Norges Bank nowcasts and a state-of-the-art forecast combination system, especially around economic turning points.

References

Alessi, L, E Ghysels, L Onorante, R Peach, and S Potter (2014), "Central Bank Macroeconomic Forecasting During the Global Financial Crisis: The European Central Bank and Federal Reserve Bank of New York Experiences", Journal of Business & Economic Statistics 32(4): 483–500.
Larsen, V H and L A Thorsrud (2017), "Asset returns, news topics, and media effects", Norges Bank working paper 2017/17.
Nakajima, J and M West (2013), "Bayesian Analysis of Latent Threshold Dynamic Models", Journal of Business & Economic Statistics 31(2): 151–164.
Peress, J (2014), "The media and the diffusion of information in financial markets: Evidence from newspaper strikes", The Journal of Finance 69(5): 2007–2043.
Sims, C A (2003), "Implications of rational inattention", Journal of Monetary Economics 50(3): 665–690.
Stock, J H and M W Watson (2003), "Forecasting output and inflation: The role of asset prices", Journal of Economic Literature 41(3): 788–829.
Tetlock, P C (2007), "Giving content to investor sentiment: The role of media in the stock market", The Journal of Finance 62(3): 1139–1168.


Exploiting MIT Shocks in Heterogeneous-Agent Economies: The Impulse Response as a Numerical Derivative

Timo Boppart, Stockholm University and CEPR
Per Krusell, Institute for International Economic Studies, Stockholm University, CEPR
Kurt Mitman, Stockholm University and CEPR

The authors propose a new and simple linearisation method for analysing frameworks with consumer heterogeneity and aggregate shocks and apply it to a standard real business cycle model with neutral (Kydland and Prescott 1982) and investment-specific technology shocks (Greenwood et al. 2000).
 
Business cycle and stabilisation research has a recent focus on heterogeneity across households, because:

  • Inequality may influence macroeconomic aggregates. Marginal propensities to consume vary substantially across households, for example in tests of the permanent income hypothesis (Johnson et al. 2004). Representative-agent models risk overlooking these important channels. Also, extreme inequality can lead to social unrest (Piketty 2014).
  • Economists and policymakers may be interested in understanding the distributional consequences of aggregate shocks. For example, a small aggregate decline in hours may mask substantial heterogeneity in unemployment risk across individuals.
  • The representative-agent nature of most macroeconomic modelling, like any other assumption, needs to be examined from a robustness perspective. People in the real world are heterogeneous, and introducing heterogeneity can be viewed simply as an effort to examine the robustness (and perhaps inappropriateness) of the benchmark model.

But it is difficult to solve dynamic models with both aggregate shocks and consumer heterogeneity, though Reiter (2009, 2010) and Ahn et al. (2017), among others, attempt it.
 
The authors argue that, given modern computer processing power, computational difficulty is no longer an excuse for abstracting from heterogeneity. There is still value, however, in providing better and easier-to-use tools—just as in engineering science—and the present paper is one such effort.

The research

The authors present an easy-to-use linearisation technique. Unlike a Taylor expansion, it does not use analytical derivatives. Instead, it is built on recursive methods whereby aggregates and prices are expressed as (linearised) functions of the state.
 
In the first step, they solve for a steady state with entirely standard, and fully nonlinear, methods. The authors then carefully study the equilibrium response to a single, small 'MIT shock'. Treating this impulse response path as a numerical derivative in sequence space yields a linearised solution directly.
 
The only nontrivial tool necessary for implementing the solution method is value-function iteration, which is needed both to solve for the steady state and to compute the transition equilibrium.
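The two-step logic can be illustrated on a deliberately simple nonlinear model. The Solow-style law of motion below is a made-up stand-in for the heterogeneous-agent economy, chosen only so the steady state, MIT-shock transition, and numerical-derivative step fit in a few lines.

```python
import numpy as np

# Toy nonlinear law of motion (illustrative only, NOT the paper's model):
# a Solow economy with capital k and productivity z,
#   k' = s * z * k**alpha + (1 - delta) * k
alpha, s, delta = 0.3, 0.2, 0.1

def step(k, z):
    return s * z * k**alpha + (1 - delta) * k

# Step 1: solve for the steady state with fully nonlinear methods
# (here, simple fixed-point iteration suffices).
k_ss = 1.0
for _ in range(2000):
    k_ss = step(k_ss, 1.0)

# Step 2: compute the transition path after a one-off 'MIT shock' to z at t=0.
T = 200

def transition(shock):
    z_path = np.ones(T)
    z_path[0] += shock
    k, path = k_ss, []
    for z in z_path:
        k = step(k, z)
        path.append(k)
    return np.array(path)

# Step 3: the impulse response per unit of shock is a numerical derivative
# in sequence space...
eps = 1e-4
irf = (transition(eps) - k_ss) / eps

# ...and scaling it gives the linearised response to a shock of any size.
approx = k_ss + 0.01 * irf   # linearised path for a 1% shock
exact = transition(0.01)     # fully nonlinear path, for comparison
```

In the paper the same derivative-in-sequence-space idea is applied to a full heterogeneous-agent economy, where the transition is itself computed by value-function iteration rather than a closed-form law of motion.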

Conclusion

The authors claim that the method is significantly easier to use than existing methods and can be done by advanced undergraduates in economics. Despite this, they argue that it would be feasible to extend the methods to more complex settings, for example with frictions in price and wage setting, and consumption habits.

References

Ahn, S, G Kaplan, B Moll, T Winberry, and C Wolf (2017), "When Inequality Matters for Macro and Macro Matters for Inequality", in NBER Macroeconomics Annual 32, University of Chicago Press.
Greenwood, J, Z Hercowitz, and P Krusell (2000), “The role of investment-specific technological change in the business cycle”, European Economic Review 44(1): 91–115.
Johnson, D S, J A Parker, and N S Souleles (2004), “The Response of Consumer Spending to the Randomized Income Tax Rebates of 2001”, Bureau of Labor Statistics, Princeton University, and University of Pennsylvania.
Kydland, F E and E C Prescott (1982), “Time to build and aggregate fluctuations”, Econometrica 50(6): 1345–1370.
Piketty, T (2014), Capital in the Twenty-First Century, Harvard University Press.
Reiter, M (2009), "Solving heterogeneous-agent models by projection and perturbation", Journal of Economic Dynamics and Control 33(3): 649–665.
Reiter, M (2010), "Approximate and Almost-Exact Aggregation in Dynamic Stochastic Heterogeneous-Agent Models", IHS working paper 259.


International Macroeconomics and Finance

Foreign Shocks as Granular Fluctuations

Julian di Giovanni, Universitat Pompeu Fabra, CREI and CEPR
Andrei A Levchenko, University of Michigan and CEPR
Isabelle Mejean, Ecole Polytechnique and CEPR

The structure of production is increasingly international, as supply chains cross country borders. In theory, foreign shocks, even if they are purely aggregate, affect firms differently depending on their international linkages.
 
The authors start from the assumption that the largest firms are the ones responsible for the bulk of international trade linkages in a typical economy (Freund and Pierola 2015). While only a minority of firms have direct trade linkages with foreign countries, they account for a large share of aggregate economic activity, and have an effect on international shock transmission.
 
This draws on previous research on the propagation of shocks in production networks that began with Carvalho (2010) and Acemoglu et al. (2012), and the importance of large firms in aggregate fluctuations (Gabaix 2011, di Giovanni et al. 2014, Carvalho and Grassi 2015).

The research

The analysis draws on a dataset covering the universe of French firm sales and country-specific imports and exports between 1993 and 2007, using a quantitative multi-country multi-sector model with heterogeneous firms. The model is calibrated to observed firm-level information for France, and to the sector-level information for France’s trading partners from the World Input-Output Database (WIOD).
 
The research uses a quantitative multi-country model in which French firms exhibit the observed joint distribution of size, importing, and exporting. The authors simulate a 10% productivity shock, and a 10% foreign demand shock for French goods. They examine both a global shock to all the countries other than France, and a shock to Germany, as it is one of France’s most important trading partners, and express results directly in terms of elasticities.
 
In the model, the response of GDP to a foreign shock is the sum of the average response of all firms to that shock and the covariance across firms between sensitivity to that shock and relative size. With a single representative firm, the covariance term is irrelevant. When firms are heterogeneous in both size and sensitivity to foreign shocks, part of the impact of a foreign disturbance on GDP is due to the covariance term.
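The accounting behind this decomposition can be verified on simulated firm data; the size distribution and sensitivities below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Made-up firm sizes with a fat tail, so a few 'granular' firms dominate GDP.
weights = rng.pareto(1.5, n) + 1.0
weights /= weights.sum()                  # GDP shares, summing to one
relative_size = weights * n               # size relative to the mean firm

# Made-up sensitivities: larger firms are more exposed to the foreign shock.
sensitivity = 0.1 + 0.05 * np.log1p(relative_size) \
              + 0.01 * rng.standard_normal(n)

# Aggregate GDP response: the size-weighted average of firm responses...
aggregate = weights @ sensitivity

# ...equals the average firm response plus the covariance between sensitivity
# and relative size (population covariance, hence bias=True).
decomposed = sensitivity.mean() + np.cov(relative_size, sensitivity, bias=True)[0, 1]

assert np.isclose(aggregate, decomposed)
```

Because relative size averages to one, the weighted mean splits exactly into the unweighted mean plus this covariance, which is the term that heterogeneity adds to the aggregate elasticity.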

Because the foreign shocks affect predominantly the largest firms in France, they lead to aggregate – granular – fluctuations. The estimation exercise serves two purposes:

  • The first is to provide econometric evidence that foreign shocks transmit to French firms using imported inputs. The authors indeed show that internationally connected firms react significantly more to foreign shocks than non-connected firms.
  • The second product of this exercise is an estimate of the demand elasticity faced by firms.

Conclusion

In the baseline calibration, French GDP increases by 3.2% following a 10% global productivity shock (an elasticity of 0.32). A German productivity shock produces an aggregate elasticity of only 0.06. The elasticities of French GDP to a foreign demand shock are an order of magnitude smaller: unlike foreign productivity, a foreign demand shock does not lower the costs of production in France. Firm-level econometric evidence shows that firms importing intermediate inputs are significantly more responsive to foreign input price shocks.
 
The covariance term accounts for 31–34% of the overall aggregate elasticity in the productivity counterfactual, and 27–32% in the demand-shock counterfactual.
 
The authors conclude that capturing the positive association between size and international linkages is essential for understanding the firm-level and aggregate international transmission of business cycle shocks.

References

Acemoglu, D, V M Carvalho, A Ozdaglar, and A Tahbaz-Salehi (2012), “The Network Origins of Aggregate Fluctuations”, Econometrica 80(5): 1977–2016.
Carvalho, V M (2010), “Aggregate Fluctuations and the Network Structure of Intersectoral Trade,” Mimeo, CREi and Universitat Pompeu Fabra.
Carvalho, V M and B Grassi (2015), “Large Firm Dynamics and the Business Cycle”, CEPR discussion paper 10587.
di Giovanni, J, A A Levchenko, and I Mejean (2014), “Firms, Destinations, and Aggregate Fluctuations”, Econometrica 82(4): 1303–1340.
Gabaix, X (2011), “The Granular Origins of Aggregate Fluctuations”, Econometrica 79(3): 733–772.


Real Interest Rates and Productivity in Small Open Economies

Tommaso Monacelli, IGIER, Università Bocconi and CEPR
Luca Sala, Università Bocconi
Daniele Siena, Banque de France

In emerging market economies (EMEs) capital inflows typically lead to output and asset price booms, appreciating real exchange rates, and excessive credit growth (Blanchard et al. 2016). In contrast, large capital inflows in the European periphery have been associated with current account imbalances, loss of competitiveness, and a slowdown in productivity, attributed to misallocation effects (Reis 2013, Gopinath et al. 2017).

The authors investigate the effects of capital inflows on aggregate productivity in both EMEs and advanced economies (AEs), by building a simple business cycle model that accounts for both.
 
They model capital inflows as exogenous variations in (world) real interest rates, for two reasons: 

  • The heated debate on the effects of ultra-easy monetary policy in the advanced economies for capital flow spillovers (Rey 2013, Miranda-Agrippino and Rey 2015).
  • The role of real interest rates fluctuations for EME business cycles (Neumeyer and Perri 2005, Uribe and Yue 2006).

In EMEs, the real interest rate is countercyclical, and negatively correlated with productivity, while in AEs, real interest rates are procyclical, and positively correlated with productivity.

The research

The authors build a unified theoretical framework for both groups of small open economies, in two steps.

  • A misallocation model. A simple business cycle model of an open economy, with financial imperfections and firm heterogeneity in productivity (leading to misallocation of production). This would seem to be more suited to EMEs than AEs (Restuccia and Rogerson 2017), but in this case an exogenous rise (fall) in the real interest rate leads to a rise (fall) in productivity, and misallocation leads to a dampening of the effects of real interest rate shocks on output.

The authors conclude that a model characterized by financial frictions and misallocation of production seems better suited to account for business cycle dynamics in AEs than in EMEs.

  • An original sin model. They adapt the misallocation model so that countries cannot borrow in their own currency. This model reflects the EME narrative, as it can generate both amplification of output fluctuations and a negative (positive) effect of higher (lower) real interest rates on productivity. The condition needed to obtain the latter result is that periods of higher (lower) real interest rates are also periods of tightening (loosening) financial conditions.

Conclusion

The introduction of an original sin channel means that higher (lower) real interest rates lead to a depreciation (appreciation) of the real exchange rate, fitting the observed data for EMEs. The results suggest that the role of firm heterogeneity and market concentration is crucial in understanding the macroeconomic effects of capital inflows.

References

Gopinath, G, A Kalemli-Özcan, L Karabarbounis, and C Villegas-Sanchez (2017), "Capital Allocation and Productivity in South Europe", The Quarterly Journal of Economics 132(4): 1915-1967.
Miranda-Agrippino, S, and H Rey (2015), "World Asset Markets and the Global Financial Cycle", CEPR discussion paper 10936.
Neumeyer, P A and F Perri (2005), "Business cycles in emerging economies: the role of interest rates", Journal of Monetary Economics 52(2):345-380.
Reis, R (2013), "The Portuguese Slump and Crash and the Euro Crisis", Brookings Papers on Economic Activity 46(1): 143-210.
Restuccia, D, and R Rogerson (2017), "The Causes and Costs of Misallocation", Journal of Economic Perspectives 31(3): 151-174.
Rey, H (2013), "Dilemma not trilemma: the global cycle and monetary policy independence", Proceedings of the Economic Policy Symposium, Jackson Hole.
Uribe, M, and V Z Yue (2006), "Country spreads and emerging countries: Who drives whom?" Journal of International Economics 69(1): 6-36.


Interest Rate Uncertainty as a Policy Tool

Fabio Ghironi, University of Washington and CEPR
Galip Kemal Ozhan, University of St. Andrews

A recent unorthodox policy experiment of the Central Bank of the Republic of Turkey used interest rate uncertainty as a policy tool in an open economy to control the capital account, and to attract Foreign Direct Investment (FDI) while discouraging short-term inflows. There is no structural model that studies this, and the authors attempt to fill that gap.
 
There have been large changes in the portfolio flows to Emerging Markets Economies (EMEs) between 2006 and 2014. The surge in the size and volatility of inflows can cause financial stability concerns and inflationary pressures (Obstfeld 2015), which will influence the conduct of monetary policy in EMEs, not just Turkey.
 
In Turkey, in response to intense capital inflows, the interest rate corridor widened from below to discourage carry trade and channel inflows towards long-term FDI, and in response to powerful capital outflows, the interest rate corridor was narrowed by raising overnight borrowing rates to prevent excessive outflows (Başçı, 2012).

The paper: 

  • Differentiates itself from the literature that studies the effects of uncertainty shocks on economic activity by treating uncertainty as a policy tool rather than as given, and by studying the effects of uncertainty on different types of capital flows. Fernández-Villaverde et al. (2011) introduce uncertainty in Mendoza (1991) and abstract from monetary features, but this is the first paper to study the implications of uncertainty in an international macro model with incomplete international financial markets, deviations from PPP, price rigidities, and investment dynamics.
  • Differs from studies of capital flows to EMEs integrated into international financial markets by distinguishing FDI from short-term flows.
  • Contributes to the literature that studies the interdependence between macroprudential and monetary policy, by shedding light on the effectiveness of using interest rate volatility as a new macroprudential tool that influences financial variables through international markets.

The research

The authors build a New Keynesian Open Economy framework. It is a two-region macroeconomic model (EME and the Rest of the World) with incomplete international financial markets and deviations from PPP. The model is augmented with interest rate uncertainty shocks, which the EME central bank uses to discourage short-term capital flows and attract FDI, while targeting inflation and output stabilisation. It differentiates from existing work by carefully distinguishing short-term capital flows from FDI.

Three key channels of uncertainty transmission affect the external account:

  • A precautionary savings channel. This is seen in short-term capital flows. In response to increased risk in the EME, agents shift away from EME debt and smooth consumption using RoW securities. This contributes to a rise in the short-term component of the EME current account.
  • A precautionary pricing channel. An increase in uncertainty results in an upward pricing bias for EME firms. This happens when the EME production sector is subject to pricing frictions: firms set their prices higher than they otherwise would, contributing to a fall in output and in the rental rates of physical capital obtained from domestic and international markets. Lower profits for RoW investors in EME productive capital induce a decline in FDI coming into the EME.
  • A growth option channel. Because FDI is irreversible through a time-to-build condition and sunk costs are offset by future profits from investing in the EME, increased interest rate uncertainty amplifies FDI entering the EME.

When there is increased uncertainty in the Home economy (the EME), agents shift away from EME debt when smoothing consumption, as a result of precautionary saving motives, leading to an increase in the short-term component of the current account.
 
When production is subject to pricing rigidities, firms adjust their prices to higher levels than they otherwise would, owing to the asymmetry in their profit function. The increase in markups, together with the fall in consumption, contributes to a fall in FDI received by the EME. This precautionary pricing motive raises the FDI component of the current account, against the policymaker’s will.
 
Finally, after introducing a time-to-build condition for FDI coming into the EME, RoW agents increase their FDI in response to increased interest rate uncertainty: because investment costs are sunk and profit from investing in the EME is not constrained, higher uncertainty raises FDI. This outcome is in line with the policymaker’s goal when using uncertainty as a policy tool.

Conclusion

The authors conclude that uncertainty can be used to adjust the external account, but at the expense of higher inflation and lower output.

References

Başçı, E (2012), “Monetary Policy of Central Bank of the Republic of Turkey after Global Financial Crisis”, Insight Turkey 14(2).
Obstfeld, M (2015), “Trilemmas and Tradeoffs: Living with Financial Globalization”, in Global Liquidity, Spillovers to Emerging Markets and Policy Responses, edited by C Raddatz, D Saravia and J Ventura, Central Bank of Chile.
Fernández-Villaverde, J, P Guerron-Quintana, J F Rubio-Ramírez, and M Uribe (2011), “Risk matters: the real effects of volatility shocks”, The American Economic Review 101(6): 2530–61.
Mendoza, E (1991), “Real business cycles in a small open economy,” The American Economic Review 81:797-818.


Understanding Global Confidence Cycles

Jongrim Ha, World Bank
Raju Huidrom, IMF
M Ayhan Kose, World Bank and CEPR
Franziska L Ohnsorge, World Bank
Naotaka Sugawara, World Bank

Measures of business and consumer confidence are used for monitoring and forecasting activity. Some studies conclude that confidence helps forecast output and consumption, and that it plays an important role in explaining business cycles and their cross-border transmission (Angeletos et al. 2014).
 
But systematic empirical analysis of the role of confidence is limited to advanced economies. Well-known cross-country data sources of confidence provided by the OECD (2012) and European Commission (2016) cover business confidence for only 42 countries and consumer confidence for only 39 countries, most of which are advanced economies. The EC provides harmonized data for 31 and 32 European countries on business and consumer confidence, respectively.
 
The surveys used in other regions are not cross-country consistent: they ask different survey questions, are reported at different frequencies and on different scales, and some are seasonally adjusted while others are not. The authors therefore standardise confidence measures along these dimensions to minimise deviations among different survey designs.
 
This database covers business confidence for 91 countries—35 advanced economies and 56 emerging market and developing economies (EMDEs). For consumer confidence, it includes 95 countries—36 advanced economies and 59 EMDEs. The time series coverage begins in 1960 and data for most economies are available from the early 2000s. This database is a one-stop repository of confidence measures from multiple sources, and is suitable for cross-country analyses.

The research 

  • The authors first compiled “raw” business and consumer confidence data from: OECD, European Commission, national statistical agencies, central banks, and non-official sources such as universities, research institutions, and private companies.
  • They standardise the raw data to make them comparable across countries. This step includes seasonal adjustment (if the raw series is not already seasonally adjusted at source), removal of high-frequency noise, and normalisation so that each series has the same mean and variance across countries.
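The standardisation pipeline can be illustrated with a minimal sketch on hypothetical data (the series, the month-average seasonal adjustment, and the 3-month smoothing window are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

# Hypothetical monthly confidence series with a seasonal pattern and noise.
rng = np.random.default_rng(1)
months = np.arange(240) % 12
raw = 100 + 3 * np.sin(2 * np.pi * months / 12) + rng.standard_normal(240)

# 1. Seasonal adjustment: subtract each calendar month's average.
seasonal_means = np.array([raw[months == m].mean() for m in range(12)])
adjusted = raw - seasonal_means[months]

# 2. Remove high-frequency noise with a centred 3-month moving average.
smoothed = np.convolve(adjusted, np.ones(3) / 3, mode="valid")

# 3. Normalise to zero mean and unit variance for cross-country comparability.
standardised = (smoothed - smoothed.mean()) / smoothed.std()
```

After this step, every country's series has mean zero and unit variance, so levels and swings are directly comparable across sources.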

Business and consumer confidence exhibit sizeable comovement across countries: a global factor explains about 47% of the variance of business confidence and 25% of that of consumer confidence. Across country groups, comovement of confidence is larger among advanced economies than among EMDEs.
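The idea of a variance share explained by a global factor can be sketched on simulated data: extract the first principal component of a standardised country panel and compute the fraction of total variance it accounts for (the panel below is simulated, not the authors' data, and the PCA step is a simple stand-in for their factor model):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 30                       # quarters, countries
global_factor = rng.standard_normal(T)
loadings = 0.5 + 0.5 * rng.random(N) # country-specific exposure to the factor
panel = np.outer(global_factor, loadings) + rng.standard_normal((T, N))

# Standardise each country series (zero mean, unit variance).
z = (panel - panel.mean(axis=0)) / panel.std(axis=0)

# First principal component proxies the global factor; its squared singular
# value over the total gives the share of variance it explains.
_, s, _ = np.linalg.svd(z, full_matrices=False)
share = s[0] ** 2 / (s ** 2).sum()
```

A share well above 1/N signals genuine cross-country comovement rather than idiosyncratic noise.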

Conclusion

The results suggest that a global confidence cycle exists. Also, a unit increase in business confidence is followed by a 1.5 percentage point increase in one-quarter-ahead output growth. As the forecast horizon increases, the magnitude of this correlation shrinks but remains statistically significantly positive. Confidence measures also help predict growth in house prices and credit.

References

Angeletos, G-M, F Collard, and H Dellas (2014), “Quantifying Confidence”, NBER working paper 20807.
European Commission (2016), “The Joint Harmonised EU Programme of Business and Consumer Surveys: User Guide”.
Organisation for Economic Co-operation and Development (2012), “OECD System of Composite Leading Indicators”.


Foreign Currency Loans and Credit Risk: Evidence from US Banks

Friederike Niepmann, Federal Reserve Board and CEPR
Tim Schmidt-Eisenlohr, Federal Reserve Board

Many firms, particularly in emerging market economies, borrow in foreign currencies because those loans are cheaper than domestic currency loans, but doing so exposes them to exchange rate risk. The authors study the effect of exchange rate changes on firms’ loan payments.

Financial markets offer instruments to hedge against this risk, but these instruments are costly, and firms often remain unhedged.
 
Currency devaluations have been thought of as enhancing firm performance by increasing the foreign demand for domestic goods, but when the domestic currency depreciates, the debt burden of a firm that has borrowed in a foreign currency increases, with negative consequences for its economic performance.
 
The authors note that existing micro-level evidence on the relevance of this balance sheet channel is limited and mixed, because balance sheet data are available only for a small set of countries.
 
Aguiar (2005) uses Mexican balance sheet data, finding that firms with heavy short-term foreign debt exposure had substantially lower investments after a large devaluation. Kim et al. (2015) report that firms’ economic performance declined more for firms with foreign currency debt during the 1997-1998 Korean crisis. Bleakley and Cowan (2008) did not find similar evidence.
 
The paper instead uses a larger dataset which is now available thanks to bank stress tests: US bank regulatory filings between 2014 and 2016, which covers firms in 105 countries. It differs from previous research:

  • Previous papers have mainly focused on a single country or a small set of countries during large devaluations.
  • The detailed loan-level data with broad country coverage allow for a robust estimation with a large number of fixed effects.
  • The data are derived from bank loan portfolios. This provides evidence that exchange rate fluctuations (exchange rate risk) translate into credit losses (credit risk) for banks.

The research

The loan-level data in the paper is from Y-14 filings that banks subject to stress testing by the Federal Reserve have to file on a quarterly basis, compiled between Q4 2014 and Q2 2016. They are composed of corporate loans and leases with a loan amount of at least $1 million, and state whether, and how long, they have been past due. During this period the dollar appreciated.
 
The data also shows the location of the borrower and the currency denomination of the loan as well as loan size, maturity, and the interest rate, among other characteristics. This is new data: 84% of the loans are not syndicated, meaning that the majority of loans in the dataset cannot be found in syndicated loan databases, often the data source for previous studies.

The data shows that:

  • 75% of loans to non-US residents are denominated in a different currency than the borrower’s home currency.
  • Foreign currency loans are around 151 basis points cheaper, and more prevalent in countries with higher inflation, lower exchange rate volatility, and a higher credit-to-GDP ratio.
  • Firms in industries with a higher share of foreign sales and a lower share of foreign assets are more likely to borrow in a foreign currency.
  • Foreign currency loans are larger and of shorter maturity.

The detail in the data allows the authors to identify a set of variables that they can use to control for firms’ selection into foreign currency borrowing, as exchange rates are correlated with macroeconomic variables that also drive firm performance. They test for the balance sheet channel by comparing firms with foreign currency debt to firms with domestic currency debt in the same country, industry, and quarter, and with the same bank-internal rating.

Conclusion

The research shows that a 10% depreciation of the local currency increases the probability that a firm becomes past due on its loans by 69 to 160 basis points more for firms with foreign currency debt than for firms with domestic currency debt. This effect stems mainly from local currency depreciations and is stronger for firms in industries with a smaller share of foreign sales.
 
Applying these results to the total foreign currency loans of US banks indicates that a 10% appreciation of the dollar causes an increase in late loan payments of $2.5 billion for these banks.
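The aggregate figure follows from a back-of-envelope calculation of the kind sketched below. The exposure number here is an assumed illustration, not the paper's actual measure of US banks' foreign currency loan books; only the 69-160 basis point range comes from the summary above:

```python
# Hypothetical aggregate exposure of US banks to foreign currency loans ($bn).
exposure_usd_bn = 250.0

# Estimated rise in past-due probability after a 10% local depreciation,
# taken from the range reported in the research (69 to 160 basis points).
delta_prob_low, delta_prob_high = 0.0069, 0.0160

extra_late_low = exposure_usd_bn * delta_prob_low
extra_late_high = exposure_usd_bn * delta_prob_high
print(f"${extra_late_low:.2f}bn to ${extra_late_high:.2f}bn in extra late payments")
```

With an exposure of this hypothetical size, the implied increase in late payments brackets a figure of the order of $2.5 billion.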

References

Aguiar, M (2005), “Investment, devaluation, and foreign currency exposure: The case of Mexico”, Journal of Development Economics 78 (1): 95–113.
Bleakley, H, and K Cowan (2008), “Corporate dollar debt and depreciations: much ado about nothing?” The Review of Economics and Statistics 90(4): 612–626.
Kim, Y J, L L Tesar, and J Zhang (2015), “The impact of foreign liabilities on small firms: Firm-level evidence from the Korean crisis”, Journal of International Economics 97(2): 209–230.


Macroeconomics and Growth

Cultural Values and Productivity

Andreas Ek, London School of Economics

The author investigates one of the most controversial potential causes of the large cross-country differences in measured Total Factor Productivity (TFP). Following Landes (1998), he investigates whether part of the large unexplained TFP differences is due to culture.
 
Following Guiso, Sapienza, and Zingales (2006), he defines “culture” as those values and beliefs that are passed down fairly unchanged from generation to generation. To investigate this, he studies differences in labour productivity across groups of workers defined by country of birth or ancestry, who now live and work in Sweden.
 
Development accounting has found human capital to play only a limited role in productivity, but this view is being challenged by a recent contribution by Hendricks and Schoellman (2018), who use a comparison of pre- and post-migration wages of US immigrants to suggest that human capital differences may explain up to two thirds of cross-country differences in income.
 
The study uses migrants because the other factors that may be important for labour productivity, such as institutions, technology, and geography, no longer apply. He can therefore ask which human capital-related country characteristics best explain differences in estimated labour productivity in the destination country.
 
The estimation relaxes the assumption of perfectly competitive labour markets: because the register data match employees to employers, differences in labour productivity can be estimated directly in firm-level production functions, instead of being inferred from wages.

The research

Estimating differences in productivity using migrants means that the author can generate country-of-origin-specific labour productivity measures unrelated to institutional, technological, and geographical factors in the origin countries. To do this, he employs Swedish register data that matches employees to their employers. This allows him to estimate firm-level production functions with heterogeneous labour.
 
In the model, firms produce (revenue) value added by combining capital and labour of different types in a Cobb-Douglas production function. The total labour input of a firm is aggregated as a CES function of the number of labour input units of each type, with labour types defined by the worker's country of origin.
 
The estimation adjusts for differences in education and experience at the individual worker-level. The country-specific parameters capture relative productivity conditional on education and experience.
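Schematically, the production structure described above can be written as follows (the notation here is assumed for illustration, not taken from the paper):

```latex
% Firm j's value added: Cobb-Douglas in capital K_j and a CES labour
% aggregate L_j, with labour types indexed by country of origin c.
Y_j = A_j K_j^{\alpha} L_j^{1-\alpha},
\qquad
L_j = \Big( \sum_{c} \theta_c \, N_{j,c}^{\frac{\sigma-1}{\sigma}} \Big)^{\frac{\sigma}{\sigma-1}}
```

where N_{j,c} is firm j's labour input from origin group c, σ is the elasticity of substitution across labour types, and θ_c is the country-of-origin-specific relative productivity parameter, conditional on education and experience, that the estimation recovers.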

Using OLS regressions, the author finds:

  • A productivity difference of roughly 40–55 percentage points can be attributed to “culture”, relative to the productivity of the baseline group, native-born males.
  • After including these cultural differences, measures of educational quality (Schoellman 2012) are insignificant as a driver of cross-country differences in human capital.
  • Shares of populations with a given level of educational attainment, life expectancy at birth, and fertility rates lack a consistent or significant relationship with labor productivity.
  • The effect persists in a second generation of migrants that have been through the Swedish schooling system. Robustness exercises suggest that the persistence is not driven by positional discrimination, genetics, (non-cultural) socioeconomic factors, or other traits passed down by parents.

In a further exercise, the author uses these estimates to adjust human capital stocks in a development accounting exercise, and finds that this adjustment decreases the amount of unexplained variation in GDP per capita across countries by 14–19 percentage points relative to “traditional” development accounting, in which human capital stocks are calculated solely on the basis of educational levels and pecuniary returns to education.

Conclusion

The paper finds that measures of cultural values have the greatest explanatory power. Clearly, finding an appropriate definition of culture is problematic: the paper uses measures of culture that were constructed by Inglehart, Baker, and Welzel, based on data from the World Values Survey. From responses to the survey, the values that are most important in driving the relationship between productivity and cultural values are autonomy (in contrast to authoritative values) and, to a lesser extent, trust. The paper notes that autonomy, the main driver of 'cultural' differences, is one of three cultural factors that Landes (1998) proposes as fundamental causes for why economic growth took off in western Europe.
 
This means that an alternative title for the paper could have been "The importance of autonomy and trust for labour productivity".

References

Guiso, L, P Sapienza, and L Zingales (2006), "Does culture affect economic outcomes?" Journal of Economic Perspectives 20(2): 23–48.
Hendricks, L, and T Schoellman (2018), "Human capital and development accounting: New evidence from immigrant earnings", Quarterly Journal of Economics, forthcoming.
Landes, D S (1998), The wealth and poverty of nations: why some countries are so rich and some so poor, W W Norton.
Schoellman, T (2012), "Education quality and development accounting", The Review of Economic Studies 79(1): 388–417.


From Weber to Kafka: Political Instability and the Rise of an Inefficient Bureaucracy

Gabriele Gratton, University of New South Wales
Luigi Guiso, EIEF and CEPR
Claudio Michelacci, EIEF and CEPR
Massimo Morelli, Bocconi University and CEPR

The authors propose a connection between bureaucratic efficiency and the legislative activism of politicians, with two "steady states" of bureaucracy, defined by Max Weber and Franz Kafka:

  •  Weber (1922) argued that a well-functioning bureaucracy reduces organization and transaction costs, guarantees order, maximises efficiency, and eliminates favoritism.
  • Kafka’s novels portray the Habsburg Monarchy administration at the beginning of the 20th century, which led to the monarchy’s stagnation. For example, the payment of a tax in Vienna required the involvement of 27 public officials, and the cost of collecting taxes in Dalmatia exceeded the tax revenue (MacMillan 2013).

They argue that when bureaucratic institutions become more inefficient, laws are implemented slowly and their quality is hard to learn. Incompetent politicians then have strong incentives to try to acquire a reputation as skilful reformers by passing excessive legislation, which the bureaucracy will implement slowly, if at all, and which makes the bureaucracy even more inefficient.
 
The paper speculates that this accounts for many periods in political history, notably the performance of the Italian government since 1992, which has seen more legislative activism but a worsening in the quality of laws and a deterioration of bureaucratic efficiency, associated with a sharp fall in TFP growth.

The research

The model treats bureaucracy as a technology that implements the reforms initiated by politicians, who vary in their ability. It sets out the conditions for the existence of a Weberian steady state, which has efficient bureaucracy and little incentive to propose useless reforms, and a Kafkian steady state, with a high frequency of useless reforms and an inefficient bureaucracy.
 
The focus of the model is on the supply-side feedback mechanism: where bureaucracy is more inefficient, politicians, especially the less competent, tend to supply more laws and reforms. Their competence is private information, fully revealed to the public only if the reform is implemented by the end of the legislative term. In equilibrium, competent politicians never propose bad reforms, while the incompetent face a trade-off: initiating a bad reform that remains uncompleted by the end of the mandate signals competence, but if the reform is actually implemented, it reveals the incompetence of its proponent.
 
The authors test the model using observable facts about Italy's bureaucracy since 1992. They note that new legislation contains errors and even incomplete or inconsistent sentences (Zaccaria 2011). The International Country Risk Guide, an index created by the PRS Group, shows an improvement in efficiency before 1992, and then a collapse. For example, queuing rates rose by 136% at the post office, by 78% at the registry office, and by 48% at the public health service. Between the end of World War II and 1992, Italy recorded average annual TFP growth of 2.2%; afterwards, excluding the Great Recession, growth has averaged 0.4%.
 
They also measure the competence of MPs, based on their ability to earn market income, as in Gagliarducci and Nannicini (2013), and record their legislative activity.

Conclusion

The paper finds support for a deterioration to a Kafkian equilibrium in a detailed analysis of Italy's recent history. It concludes that the sharp increase in political instability after the end of the Cold War produced a sharp increase in legislative activism, accompanied by a deterioration in bureaucratic efficiency and poor aggregate performance. Over this period, the relative sponsorship of laws by incompetent politicians increased.

References

Gagliarducci, S and T Nannicini (2013), “Do Better Paid Politicians Perform Better? Disentangling Incentives from Selection”, Journal of the European Economic Association 11 (2): 369–398.
MacMillan, M (2013), The War that Ended Peace: The Road to 1914, Penguin Canada.
Weber, M (1922) 1978, Economy and Society: An Outline of Interpretive Sociology, University of California Press.
Zaccaria, C (2011), “La buona scrittura delle leggi”, Camera dei Deputati.