Business Cycles

A CEPR conference on 'New Approaches to the Study of Business Cycles'
was held in Madrid on 30/31 January 1998. The conference, which was
organised by Lucrezia Reichlin (ECARE, Université Libre de Bruxelles and CEPR),
assembled recent contributions on the measurement and theory of the
business cycle from different methodological perspectives. One theme
of the conference was the relationship between micro- and
macroeconomic dynamics; another was trend-cycle decompositions for
macroeconomic indicators. There were also papers on equilibrium
business-cycle theory and simulation results. In all, 12 papers were
presented.

'Capital
accumulation with incomplete markets' was presented by Francesc
Obiols-Homs (ULB and ECARE) and written jointly with Albert Marcet (Universitat Pompeu Fabra, London Business School and
CEPR) and Philippe Weil (ULB,
ECARE, CEPR and NBER). The authors asked whether the lack of perfect
insurability implies that there will always be precautionary savings
in the steady state. When markets are complete, economic agents can
get perfect insurance against major fluctuations in their wealth and
do not therefore need to save for precautionary motives. When markets
are incomplete, however, precautionary savings are likely to ensue,
thus allowing both a higher rate of capital accumulation and faster
growth. Previous research has suggested that, in an economy with a
fixed labour supply, the availability of precautionary savings implies
an aggregate stock of capital around 2-3% larger than under complete
markets. Consequently, if incomplete markets were always characterized
by positive aggregate precautionary savings, a better allocation of
risk would lead to an increase in welfare, but would cause output to
fall. The authors argued that the answer to their question must rely heavily on the elasticity of, and the wealth effects on, labour supply. When labour supply is endogenous, wealth effects will shrink the size of incomplete, relative to complete, market economies. In these
circumstances, the existence of precautionary savings will depend on
aggregate ex-post wealth,
and on the complementarity between the input creating the wealth
effect and other productive inputs. For a sufficiently high employment
rate, the wealth level of employed agents will be such that the
aggregate labour supply under incomplete markets will be smaller than
under complete markets. Hence incomplete markets per
se do not necessarily imply either the existence of precautionary
savings or an increase in the size of the economy relative to complete
markets. Instead, if there were complementarities in production, and
some of the inputs were able to create wealth effects, then completing
the markets would increase not only welfare, but also output. This
result called for a cautionary approach to the design of an optimal
taxation scheme. A positive tax on capital income would be likely to
move the economy even further from the optimal state. The authors' own
simulations indicated that their results were robust with respect to
the introduction of substitution effects implied by a particular form
of aggregate uncertainty.

Empirical
studies of the financial decisions of firms have revealed important
differences in the behaviour of large and small firms. In 'Monetary
policy and the financial decisions of firms', Thomas
F Cooley (University of Rochester) and Vincenzo
Quadrini (Universitat Pompeu Fabra) developed a general
equilibrium model to help to explain this feature, among others. In
the model, the capital structure of firms changes endogenously over
time and over the business cycle as a result of the firms' financial
decisions, and in response to idiosyncratic technology shocks, as well
as to aggregate real and monetary shocks. One of the objectives of the
paper, which was presented by Quadrini, was to provide a general
equilibrium framework for describing the transmission of monetary
shocks to the economy, where the effect of the shocks was to change
liquidity levels in the economy.

Four
interdependent sectors of the economy are considered: households,
firms, financial intermediaries and mutual funds. There is a continuum
of firms which, in each period, are heterogeneous in their initial
equity capital. The firms' equity is endogenous and changes over time
as profits are reinvested, this being the only source of increased
equity. Firms finance their working capital by borrowing (up to a
maximum defined by the firm's liquidation value) from financial
intermediaries through a standard debt contract based on a
non-contingent interest rate.

The
authors derived several results. First, they determined the
industry-wide dynamics and the equilibrium distribution of firms.
Second, their model predicted that small and large firms would respond
differently to aggregate real and monetary shocks. Small firms were
found to be more sensitive to monetary shocks, whereas the response to
real shocks was slightly greater for big firms. Third, the behaviour
of households and firms had business-cycle implications. Firm
heterogeneity was found to generate more persistence in the economy's
response to monetary and real shocks, although the real effects of
monetary policy were found to be very small. Fourth, monetary shocks
led to considerable volatility in financial markets, particularly in
stock-market returns. Monetary policy shocks and their effects on
stock-market fluctuations could thus serve as an explanation for the
puzzle of excess volatility of stock returns for representative-agent
economies.

The
model was calibrated to generate quantitative implications broadly
consistent with aggregate data.

Sumru
Altug (Koç University and CEPR) noted, however, that it did not
test whether this was true also of firms at the individual level.
Since firms' decision rules depended on a specific set of observable
state variables, a simple test could determine their significance for
the firms' behaviour.

International
agreements – such as the Maastricht and Amsterdam Treaties – to
formalize stabilization pacts for debt and public deficits arise from
the perception that national deficit spending imposes negative
externalities on foreign countries. The possibility of such
externalities is not new – interest-rate transmission channels, for
instance, have long posed such a threat. In EMU, however, their
potential will be accentuated by the creation of the European Central
Bank, which may be called upon to bail out a heavily indebted member
country, thereby ultimately threatening union-wide price stability. In
a paper written with Soren Bo
Nielsen (Copenhagen Business School) and entitled 'Is coordination
of fiscal deficits necessary?', Harry
Huizinga (CenTER, Tilburg University, and CEPR) examined the scope
for fiscal rules to restrict government borrowing in the case where
government financing stems from capital income taxation. The paper
stressed interest-rate and tax externalities as a rationale for
international restrictions on national budget deficits. In a
two-period model, the authors generalized the existing literature by
allowing for public expenditures in each period to be financed by
distortionary taxes on investment and saving, with the possibility of
first-period public deficits.

'Spanish
unemployment and inflation persistence: Are there Phillips
trade-offs?' was written by Juan
J Dolado (Universidad Carlos III, Madrid, and CEPR), J
David Lopez-Salido (Banco de España) and Juan
Luis Vega (European Monetary Institute), and presented by Juan
Dolado. The paper considered the evolution of inflation and
unemployment in Spain during the period 1964–95, including what
appears, at first sight, to be an unemployment-inflation trade-off
close to 1:1 since the mid-1970s. The authors analysed the
implications of hysteresis effects – related in Spain to high firing
costs and long unemployment-benefit duration – and of high-inflation
persistence for both the Phillips trade-off and the sacrifice ratio.
They employed a bivariate VAR model, with both the inflation and
unemployment rates in first differences. The structural innovations
associated with the latter two variables were defined to be the demand
and supply shocks, which were recovered from the estimated VAR
residuals.

This
methodology allowed the authors to address a number of relevant
issues: (1) estimation of the short- and long-run effects of both
demand and supply shocks on unemployment and inflation; (2) estimation
of the Phillips curve trade-off; and (3) testing for both the long-
and short-run neutrality of the Phillips curve. They considered three
identification schemes: a real business-cycle model; a
neoclassical-monetarist-rational expectations model; and a Keynesian
model. The monetarist identification scheme was found to be
best-suited for the unemployment-inflation joint dynamics. The authors
were unable to reject the existence of a permanent output loss of half
a percentage point for each percentage point of permanent
disinflation. When the VAR was augmented by a fiscal-policy variable,
however, namely logged government current expenditures (in second
differences), in an attempt to disentangle monetary from non-monetary
demand shocks, the data favoured a transitory trade-off with a
cumulative output loss of about six percentage points of GDP
(notwithstanding the high degree of hysteresis in the Spanish labour
market). Nonetheless, the authors argued that the benefits from
lower inflation were positive and similar in both cases.
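To make the identification step concrete, a minimal sketch of recovering structural demand and supply shocks from an estimated bivariate VAR is given below. The Blanchard-Quah-style long-run scheme shown here is purely illustrative and not necessarily the scheme preferred in the paper; the data object, lag length and which shock is labelled which are assumptions.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def long_run_identified_shocks(data, lags=4):
    """data: (T x 2) array of first differences, e.g. [d(unemployment), d(inflation)];
    both the ordering and the lag length are illustrative assumptions."""
    res = VAR(data).fit(lags)
    k = data.shape[1]
    a1 = np.eye(k) - res.coefs.sum(axis=0)                 # I - A(1)
    c1 = np.linalg.inv(a1)                                 # long-run multiplier matrix
    lr_cov = c1 @ np.asarray(res.sigma_u) @ c1.T           # long-run covariance of the system
    impact = a1 @ np.linalg.cholesky(lr_cov)               # chosen so the long-run impact matrix is
                                                           # lower triangular: the second shock has
                                                           # no permanent effect on the first variable
    shocks = np.linalg.solve(impact, np.asarray(res.resid).T).T
    return impact, shocks                                  # impact @ impact.T equals Sigma_u
```

The alternative identification schemes discussed above amount to different restrictions placed on this impact matrix.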
The starting-point for the paper by Harald Uhlig (CenTER, Tilburg University, and CEPR) was the claim that traditional VAR approaches to identifying the impact of a monetary-policy shock on output do not draw their conclusions directly from the data, but rather from the imposition of either a causal ordering of the variables or contemporaneous or long-run restrictions. Entitled 'What are the effects
of monetary policy? Results from an agnostic identification
procedure', the paper proposed an alternative method of directly
imposing sign restrictions on the responses of prices, non-borrowed
reserves and the US federal funds rate (FFR) to a monetary shock. More
specifically, Uhlig assumed that a contractionary shock leads to no
increase in prices, no increase in non-borrowed reserves and no
decrease in the FFR for a certain period (the 'response horizon')
following the shock. Unlike the previous VAR literature, moreover,
this approach sought the identification of a single (monetary) shock
alone, with no restrictions imposed on the response of output, thus
leaving the data to 'speak for themselves'.

The
monetary shock was identified from an impulse vector, which minimized
a criterion that depended upon the response horizon and upon a function 'penalizing' responses of prices, non-borrowed reserves and the FFR with
signs different from those desired.
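A stripped-down sketch of sign-restriction identification in this spirit is given below. It uses a pure accept/reject rule rather than Uhlig's penalty function, and the variable ordering and the reduced-form inputs are assumptions.

```python
import numpy as np

# Assumed variable ordering: [real GDP, GDP deflator, commodity prices,
# federal funds rate, non-borrowed reserves, total reserves].

def impulse_responses(coefs, impact, horizons):
    """Responses of a reduced-form VAR(p) to the impulse vector `impact`;
    coefs is a list of (k x k) lag coefficient matrices."""
    irf = [impact]
    for h in range(1, horizons + 1):
        irf.append(sum(coefs[j - 1] @ irf[h - j]
                       for j in range(1, min(h, len(coefs)) + 1)))
    return np.array(irf)

def accepted_monetary_shocks(coefs, sigma_u, horizons=5, n_draws=1000, seed=0):
    """Draw impulse vectors a = chol(Sigma_u) q with q uniform on the unit
    sphere; keep those whose responses satisfy the sign restrictions."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(sigma_u)
    keep = []
    for _ in range(n_draws):
        q = rng.standard_normal(sigma_u.shape[0])
        a = chol @ (q / np.linalg.norm(q))
        irf = impulse_responses(coefs, a, horizons)
        # prices and non-borrowed reserves must not rise, the funds rate must not
        # fall; the response of output (column 0) is deliberately left unrestricted
        if (irf[:, [1, 2, 4]] <= 0).all() and (irf[:, 3] >= 0).all():
            keep.append(irf)
    return keep
```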
The results, which broadly confirmed those of previous studies, constituted what Uhlig termed a
'new Keynesian-new classical synthesis': even though the general price
level reacts sluggishly, money has no real effects. More specifically,
he found that: (1) contractionary monetary policy shocks had an
ambiguous effect on real GDP; (2) the GDP price deflator fell only
slowly following a contractionary shock, possibly indicating price
stickiness, while the commodity price index fell more quickly; and (3)
monetary policy shocks accounted for only a small fraction of the
forecast-error variance in prices and, except at horizons shorter than
half a year, in the FFR as well.

The
fact that monetary shocks appeared to capture so little variation in
future inflation could be interpreted to mean that US monetary policy
had been largely successful, in that it was predictable. As Uhlig
acknowledged, however, this result could also imply faulty identifying
assumptions – a point taken up in the ensuing discussion.

Fabio
Canova (Universitat Pompeu Fabra and CEPR) argued that even where
the identification was concerned only with a single shock and no
restrictions were imposed on the response of real GDP, this did not
get rid of the indeterminacy problem. Some response variables therefore needed to be restricted; and, if some of these restrictions turned out to be incompatible with economic reality, the identified shock would become meaningless. Canova also pointed to the sensitivity of
Uhlig's method with respect to the specification of both the
impulse-response horizon and the penalty function. Nonetheless,
Uhlig had refined the method for identifying a monetary shock and his
results did seem strongly consistent with prior expectations.

Vanghelis Vassilatos (IMOP and IOBE) introduced his paper, 'A small
open economy with transaction costs in foreign capital', which was
written with Tryphon Kollintzas
(Athens University, CEPR, IMOP and IOBE). Vassilatos noted that, with
a few notable exceptions (Canada, France, Portugal, Sweden and the
United Kingdom), the success of real business cycle (RBC) models in accounting for the US economy had not been replicated for other countries. He and Kollintzas were attempting to remedy this by
extending the standard RBC model to incorporate the behaviour of a
small open economy in which access to foreign capital markets was
impeded by transaction costs and in which the public sector was large
and distorting. They calibrated their model on data for the Greek
economy from 1960 to 1992. A second objective was to analyse the
response of the major Greek macroeconomic variables to various
temporary and permanent policy changes, most notably the effects of
foreign transfers and the so-called 'Delors I and II' packages.

The
model successfully reproduced several stylized facts of the Greek
business cycle, and the impulse-response analysis predicted that
increases in the GDP share of government consumption would adversely
affect output and factor productivity, and would increase net foreign
asset holdings. A higher GDP share of domestic transfers would have
qualitatively similar, but quantitatively smaller, effects. Increases
in the GDP share of government investment, however, would raise output
and all kinds of capital but decrease labour. These results suggested
that the relative increases in government consumption and in foreign
and domestic transfers over the last 20 years had worked to the
detriment of the Greek economy, owing to distortions to the incentives
to save and work.

The
model proved weak, however, in reproducing labour-market behaviour. In
the authors' view, this was because of the high degree of
centralization of the Greek labour market.

Graziella
Bertocchi (Università di Modena and CEPR) concurred and suggested
an alternative modelling strategy, based on the fact that capital
controls can be introduced as protection for the bargaining power of
labour.

Oved Yosha
(Tel-Aviv University) explored the possibility of incorporating an
'optimally behaving' public sector, but the authors argued that the
model's results were qualitatively insensitive to the role of
government consumption in preferences.

Fabio
Canova commented that the growth-oriented nature of the 'Delors
packages' was not fully captured by a model focusing primarily on
short- and medium-term fluctuations.

'New
approaches for modelling dynamics of large cross-sections' was written
by Christophe Croux (ECARE), Mario
Forni (Università di Modena), Marc
Hallin (ECARE and ISRO), Marco
Lippi (Università di Roma and ECARE) and Lucrezia
Reichlin (ECARE and CEPR), and was presented by Lucrezia Reichlin.
Although the empirical co-movement of macroeconomic aggregates is one
of the few stylized facts of economics, it is – paradoxically –
also one of the least well-documented and understood facts.
'Co-movement' is a loosely used term with many different
interpretations. Moreover, persistent aggregate fluctuations often can
be explained by micro shocks, propagated locally by co-movements of
microeconomic units through input-output relations, spillovers and
other interactions. Observed aggregate fluctuations caused by such
local, rather than aggregate, shocks thus require explanation through
a different class of macroeconomic models.

Accordingly,
the authors offered two lines of analysis. The first developed a
measure of co-movement, which was close to the notion of dynamic
correlation, but which also took into account differences in drifts
and variances. Being defined in the frequency domain, the measure
could be used to study business-cycle as well as short- or long-run
questions. It could also be generalized to provide a summary index of
'cohesion', i.e. the degree of co-movement either within or between
groups of variables (or individuals).
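A rough sketch of how such a frequency-domain measure might be computed is given below; the cospectrum-based formula and the equal-weight cohesion index are assumptions in the spirit of dynamic correlation, not the authors' exact estimator.

```python
import numpy as np
from scipy.signal import csd, welch

def dynamic_correlation(x, y, fs=1.0, nperseg=64):
    """Cospectrum of x and y divided by the square root of the product of
    their spectral densities, evaluated on a grid of frequencies."""
    freqs, sxy = csd(x, y, fs=fs, nperseg=nperseg)   # cross-spectral density (complex)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)        # spectral density of x
    _, syy = welch(y, fs=fs, nperseg=nperseg)        # spectral density of y
    return freqs, np.real(sxy) / np.sqrt(sxx * syy)

def cohesion(series, fs=1.0, nperseg=64):
    """Equal-weight average of pairwise dynamic correlations within a group."""
    n = len(series)
    rhos = [dynamic_correlation(series[i], series[j], fs, nperseg)[1]
            for i in range(n) for j in range(i + 1, n)]
    return np.mean(rhos, axis=0)
```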
The authors provided two illustrations. First, by analysing the 'local interaction hypothesis'
in a panel of sectoral output data for 450 US manufacturing sectors
since 1958, they demonstrated that both the extent and shape of
cohesion between different sectoral groups conveyed information about
the nature of the shocks and the propagation mechanisms. Second, they
used per capita output data for US states and European countries to
evaluate differences in overall intra-group cohesion. In both
illustrations, bootstrap confidence intervals for the co-movement
measure were computed.

In
the second line of analysis, the paper proposed an econometric
framework for studying the propagation of micro shocks (with possibly
local effects) and of aggregate shocks. The model – a dynamic
approximate factor model – generalized the usual (static)
principal-components analysis to a dynamic framework and, more
specifically, to the frequency domain. The authors obtained a
consistency result for their estimator, as well as encouraging
preliminary simulation results.

Margherita Borella (University College London) presented 'Stochastic
components of individual consumption: a time series analysis of
grouped data', written with Orazio
Attanasio (University College London and CEPR). The authors noted
that although the well-known dynamic properties of aggregate
consumption had stimulated development of different theoretical models
of consumption behaviour, most empirical studies had focused on some
version of an Euler equation, estimating and testing the structural
model by exploiting the over-identifying restrictions implied by such
an equation. Little was known, however, about the stochastic
properties of consumption at the individual level. Previous studies
had either modelled only the labour-market variables or focused solely
on the dynamics of purely idiosyncratic components, treating aggregate
shocks and business-cycle patterns as nuisance parameters to be
eliminated in preliminary regressions.

The
authors therefore proposed a new methodology for analysing the
time-series properties of individual consumption expenditure. Their
aim was to characterize, at the individual level, the
variance-covariance matrix of innovations to consumption, its
components and other variables. They considered joint modelling of
several components of consumption important, since the presence and
nature of common factors could provide information about the empirical
relevance of consumption-behaviour models. In particular, it allowed
different models of individual behaviour and of market interactions,
such as the existence of complete contingent markets, to be tested.
Given the lack of panel data, a distinguishing feature of the proposed approach was the focus on large-T asymptotics, rather than large-N asymptotics, since the former were required for proper inference about the dynamic properties.

With
the life-cycle model in mind, the authors examined consumption in
relation to age, dividing the sample into cohorts of individuals that
were followed over time. They also considered educational and
occupational characteristics, and modelled the cross-sectional
heterogeneity among groups.
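The grouping step can be pictured with a small sketch of cohort construction from repeated cross-sections; the column names and the five-year cohort width are purely illustrative assumptions.

```python
import pandas as pd

def cohort_means(df, birth_width=5):
    """df has one row per household with columns 'year', 'birth_year',
    'nondurable' and 'durable'; returns cohort-by-year mean consumption."""
    df = df.assign(cohort=(df["birth_year"] // birth_width) * birth_width)
    return (df.groupby(["cohort", "year"])[["nondurable", "durable"]]
              .mean()
              .unstack("year"))   # one row per cohort, one column per survey year
```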
The empirical results suggested that consumption was highly sensitive to output, and that durable
consumption was by far the most volatile component.

Harald
Uhlig commented that this result was to be expected, given the
behavioural similarities between durable consumption and investment.
It was also unsurprising that the volatility of non-durable
consumption was almost invariant with respect to cyclical frequency,
whereas durable consumption fluctuated more at business-cycle
frequencies than in the short or long run.

Fabio Canova (Universitat Pompeu Fabra and CEPR) presented his
paper, 'Testing for heterogeneities in the cross-sectional dimension
of a panel: a predictive density approach'. Recent theoretical
research on growth and development suggests the possibility of
'convergence clubs' emerging within groups of countries or regions.
This clustering may be induced by intra-group similarities in
preferences and technologies or in government policies. Hitherto,
however, there has been little formal empirical examination of the
existence of such a tendency. Canova thus proposed a general technique
for determining the number of such 'clubs' and the location of break
points in cross-sectional data. The test – within the Bayesian
tradition – allowed for heterogeneity within groups, and used
predictive densities to estimate the hyperparameters of each club,
and posterior analysis to draw inferential conclusions about functions
of the model coefficients. No distributional assumptions were required
about the errors in the model, as long as the quasi-likelihood of the
data, conditional on the hyperparameters, could be computed.
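A schematic illustration of the predictive-density idea, not Canova's actual test, is sketched below: ordered cross-sectional units are split at every admissible point, and each candidate partition is scored by the predictive (marginal) likelihood of a simple hierarchical normal model. All of the model ingredients here are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def log_predictive(y, prior_mean=0.0, prior_var=10.0, noise_var=1.0):
    """Log marginal likelihood of a group under y_i ~ N(mu, noise_var),
    with mu ~ N(prior_mean, prior_var)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    cov = noise_var * np.eye(n) + prior_var * np.ones((n, n))
    return stats.multivariate_normal.logpdf(y, mean=np.full(n, prior_mean), cov=cov)

def best_single_break(y_ordered, min_size=5):
    """Score every admissible split of the ordered cross-section into two
    'clubs'; return the best break point and the score of no break at all."""
    n = len(y_ordered)
    scores = {k: log_predictive(y_ordered[:k]) + log_predictive(y_ordered[k:])
              for k in range(min_size, n - min_size)}
    k_star = max(scores, key=scores.get)
    return k_star, scores[k_star], log_predictive(y_ordered)
```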
The author applied the technique to both a simulated and an observed data
set, the former to establish the power and size properties of the
tests and the features of the estimated parameter distributions, and
the latter to test empirically the existence of convergence clubs
among European regions. The observed data comprised per capita incomes
for 144 European regions, measured relative to the European average,
for the period 1980-92. Four clusters were identified, each
characterized by parameters controlling its speed of adjustment to
the steady state and the (relative) mean level of per capita
steady-state income. A high dispersion of steady states across each
group was found, and inter-cluster heterogeneity was confirmed by
differences in the long-run mobility indices across groups.

Christophe Croux commented on the limitations both of using a
Bayesian-based predictive density, and of seeking an optimal solution
to the clustering problem via a technique in which it is
computationally impossible to consider all possible partitions of n
individuals into a given number of groups. He wondered whether similar
results could not be obtained using more traditional
ad hoc clustering techniques, which are model-free and independent
of prior beliefs about the clustering structure.

Daniel Peña (Universidad Carlos III de Madrid) presented
'Forecasting with leading indicators by partial least squares',
written with Juan Antonio Gil
(Universidad Carlos III de Madrid). The use of a limited number of
indicators to summarize the responses of a large number of highly
correlated variables with respect to changes in a variable (or
variables) whose behaviour is to be predicted is well-established.
Examples include the business cycle 'diffusion indices' constructed by
Quah and Sargent, and, in Spain, the use of a synthetic leading index
to predict the inflation rate. Although principal components analysis
(PCA) and factor analysis (FA) are the most commonly used procedures
for such exercises, they do not exploit the relationship between the
explanatory variables and the variables to be predicted. Partial least
squares (PLS) estimation methods, by contrast, do exploit these
relationships and have been used extensively in chemometrics.

Calling
for greater use of PLS in econometrics, Gil and Peña argued that it
had major advantages over traditional regression methods. For example,
PLS allowed for the number of explanatory variables to be larger than
the number of observations, and offered a means for correcting for
possible strong multicollinearity between the regressors. Moreover,
the PLS estimate could be interpreted as a 'shrinkage' of the usual
least squares estimate.
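A minimal one-step-ahead forecasting sketch using a standard PLS implementation is shown below; the alignment of indicators with the next-period target and the number of components are assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_forecast(indicators, target, n_components=2):
    """indicators: (T x N) array of leading indicators observed at t;
    target: length-T array of the series to be predicted at t + 1."""
    X, y = indicators[:-1], target[1:]            # pair indicators at t with the target at t+1
    pls = PLSRegression(n_components=n_components).fit(X, y)
    return pls.predict(indicators[-1:])[0, 0]     # forecast for period T + 1
```

Because the components are built from the covariance between the indicators and the target, rather than from an inversion of X'X, the procedure remains feasible when N exceeds T or when the indicators are nearly collinear.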
The authors applied the algorithm to simulated dynamic data, and to the
forecasting of the Spanish inflation rate for the period February 1977
to August 1997. For the latter application, the method provided better
estimation and forecasting results than traditional time-series
analyses.

Given its characteristics, however, Marc Hallin found it puzzling that PLS could lead to any
satisfactory results in the presence of highly-correlated or collinear
regressors. Its best selling-point might be the shrinkage argument,
although even then there were important qualifications. Enrique
Sentana (CEMFI and CEPR) noted that if the specification criteria
for the number of PLS factors did not converge, more factors would
have to be added to the model to capture the dynamics of the
regressor.

'A
review of systems cointegration tests' was written by Kirstin
Hubrich (Humboldt-Universität zu Berlin), Helmut Lütkepohl (Humboldt-Universität zu Berlin) and Pentti
Saikkonen (University of Helsinki), and was presented by Kirstin
Hubrich. Determination of the number of long-run equilibrium paths
linking the variables in a system is one of the primary motivations
for cointegration testing. Although a wide range of procedures is now
available, there is no consensus that any single method outperforms
all the others. The authors reviewed the systems cointegration
literature, and compared the various assumptions for the asymptotic
validity of the different tests within a general unifying framework,
presenting local power analyses, where available. A major contribution
of the paper, in their view, was that it placed simulation-based
comparisons of the size and power properties of the full range of
tests on a common footing, using a bivariate vector process. The paper
also reviewed systematically the differing assumptions regarding the
deterministic terms, and it considered the performance of tests that
do not depend on the values of the mean and the trend parameter in the
Data Generating Process (DGP). In the latter case, it was noted that
some newly suggested Lagrange multiplier-type tests performed
similarly to standard likelihood ratio tests in small samples, whereas
other tests were outperformed. Discussion focused on issues for
further research, including the set-up of simulations in tests for a
larger set of variables, and the robustness of results with respect to
the possibility of some cointegrated variables exhibiting structural
breaks.
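For concreteness, one of the standard systems procedures under review, the Johansen trace test, can be sketched as follows; the deterministic-term specification and lag order are generic assumptions rather than the paper's own simulation set-up.

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def cointegration_rank(y, det_order=0, lags=1):
    """Select the cointegrating rank of the (T x k) array y by comparing the
    trace statistics with the tabulated 5% critical values."""
    res = coint_johansen(y, det_order, lags)
    rank = 0
    for r, (stat, cv) in enumerate(zip(res.lr1, res.cvt[:, 1])):
        if stat > cv:
            rank = r + 1        # reject 'rank <= r' and move to the next null
        else:
            break
    return rank
```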
Shifts in relations among economic variables over time – such as those
induced by regime changes – may introduce deterministic breaks in
data series which may be difficult to detect both by visual inspection
and by available statistical methods. Current practice with unit root
univariate processes, or with cointegrated error correction systems,
is to circumvent this problem by preliminarily fitting the data with
dummy variables. Studies of the effects of such procedures have
concluded that the inclusion of dummy variables, as well as the size
or the timing of the breaks, affects the critical values of the unit
root or cointegration tests. In 'Detrending procedures and
cointegration testing: ECM tests under structural breaks', Alvaro
Escribano (Universidad Carlos III de Madrid) and Miguel Arranz (Universidad de Alicante) followed a different
direction by examining robust procedures to test for unit roots in the
presence of structural breaks in an error correction mechanism (ECM)
context. Instead of including dummy variables in ECM models, the authors set out to approximate these breaks by extending the number of lags in the models, as determined by the Schwartz-Bayesian information criterion (SBIC). In doing so, they looked at the critical values, studied the size of the ECM test under different MA(1) errors and analysed the power of the test with Monte Carlo simulations. The robustness properties of the test were examined by applying the same procedure not to the observable variables but to their (unobserved) trend and cyclical components, obtained from appropriate filters. Three types of structural breaks were considered: full cobreaking, cobreaking in levels, and cobreaking in differences. In all cases, bootstrap methods were used to compute the tests' critical values. The simulation results showed that in ECM tests the critical values depend upon the type of break and other nuisance parameters, and that, in some special cases, the test with structural breaks may have large size distortions. The use of trend and cycle decomposition procedures improved robustness in terms of size and with respect to MA error processes. The authors also argued that test performance can be improved by augmenting the number of lags – a conclusion that was considered surprising by Massimiliano Marcellino (European University Institute), who thought this would lead to potential overspecification of the model and, consequently, more inefficiency. |