Anglo/Finnish Workshop
Macroeconomics and Capital Markets

One of the goals of the Centre is to bring together its Research Fellows with their colleagues from other countries in order to exchange ideas and to foster collaborative research initiatives. Financial support from the Yrjo Jahnsson Foundation enabled CEPR Research Fellows to meet academic and government economists from Finland at a two-day workshop last December at CEPR.

The workshop was opened by Seppo Honkapohja (Yrjo Jahnsson Foundation) with a paper entitled 'Speculation, Instability, and Government Policy', which described the progress of his research on the dynamic aspects of government policy, especially the relationship between the actions of the government and those of the private sector. Individuals will adjust the allocation of their wealth between public and private sector assets in response to current government policy. But since individuals must also plan for the future, their current portfolio choice will also depend on their expectations of future government policies. Since these policies are uncertain, agents in the private sector will assign different probabilities to each policy option: essentially, they speculate against different future policies. Honkapohja's research was designed to explore how such speculation will in turn affect current government policy.

In Honkapohja's main model, the government is assumed to hire capital goods from households, which it uses to provide free public services. Households allocate their stocks of capital goods between the public and private sectors, aiming to maximise their expected utility over their lifetime. They must therefore allow for a changing pattern of taxes and subsidies on their incomes over time. The rentals from the public sector are used by the private sector as money balances, holdings of which vary with total expenditure on goods produced by the private sector (since the public sector by assumption provides goods and services free of charge).

The allocation problems facing households and the policy problems facing the government in this model raise difficult technical issues, which Honkapohja could only solve using some simplifying assumptions. One of these was that uncertainty concerning future tax changes could be described by a Poisson probability distribution. This would be appropriate if the government only changed tax rates when its objectives moved outside a band of target values. Within this 'band', the costs of a change in policy would outweigh its benefits.
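The timing assumption can be illustrated with a short simulation, in which the waiting times between policy changes are exponentially distributed, as a Poisson process implies. The parameters and the function itself are illustrative, not Honkapohja's calibration:

```python
import random

def simulate_policy_changes(rate=0.5, horizon=20.0, seed=1):
    """Simulate the dates of policy changes when arrivals follow a
    Poisson process: the waiting time between successive changes is
    exponentially distributed with mean 1/rate."""
    random.seed(seed)
    t, changes = 0.0, []
    while True:
        t += random.expovariate(rate)  # exponential waiting time
        if t > horizon:
            break
        changes.append(t)
    return changes
```

On this assumption the probability of a change is the same in every period, regardless of the state of the economy, which is precisely the feature questioned in the discussion below.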

The participants recognised the technical problems raised in Honkapohja's analysis. Marcus Miller (Warwick and CEPR) suggested an alternative approach, in which the probability of falling outside the target band at any given time would depend on the position of the economy. Mike Wickens (Southampton and CEPR) wondered why a government should respond to a one-off shock, as it did in Honkapohja's analysis. Policy changes did not appear in practice to be random, and subsequent discussion considered allowing the probability of a change in policy to depend on the level of money holdings or the inflation rate. David Begg (Worcester College, Oxford, and CEPR) drew attention to the technical problems this approach would raise, and participants considered how these problems might be resolved.

Economic theory often implies constraints on the parameters of a set of behavioural equations, such as the homogeneity and symmetry constraints in demand systems. To obtain parameter estimates consistent with the underlying economic theory, these constraints should be imposed in estimation. Alternatively, one may want to test the constraints by estimating the model in both constrained and unconstrained form. There are, however, reasons why these constraints might not hold exactly. The data used in demand or cost studies are often aggregated, and the constraints on the behaviour of an individual agent may be invalid at an aggregate level. Hence, testing the constraints or imposing them exactly may be inappropriate for aggregate data.

In addition, the optimising behaviour on which the derivation of the parameter constraints is based may be imperfect. Such optimisation errors are a common justification for adding an error term to behavioural equations. These arguments suggest that one might wish to treat the parameter constraints implied by economic theory as stochastic. Such an approach would also allow an assessment of the sensitivity of the results to the imposition of the constraints.

Pekka Ilmakunnas (Research Institute of the Finnish Economy) explored this approach in his paper 'Stochastic Constraints on Cost Function Parameters: Mixed and Hierarchical Approaches'. He considered two ways of incorporating randomness in the estimation of cost functions (which are defined as the minimum cost to the firm of producing at a given output level). The first approach consisted of simply adding a random term to the constraints themselves; so-called 'mixed' estimation procedures were appropriate in this case. The second approach assumed that the constraints were satisfied exactly, but treated the parameters themselves as stochastic, giving rise to a 'hierarchical' approach. The choice between the two approaches will depend on how the randomness is assumed to arise: if the parameters are stochastic, the hierarchical approach is better; if not, the mixed approach is preferable.
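The first, 'mixed' approach can be sketched in a few lines: the stochastic constraint is stacked with the data as a set of extra, noisy observations and the two sources are combined by generalised least squares, in the manner of Theil and Goldberger. The variances and data below are illustrative, not Ilmakunnas's specification:

```python
import numpy as np

def mixed_estimation(X, y, R, r, sigma2_e=1.0, sigma2_v=0.01):
    """Theil-Goldberger 'mixed' estimation: treat the stochastic
    constraint r = R @ beta + v (v random, variance sigma2_v) as
    extra noisy observations alongside y = X @ beta + e, and
    combine the two by generalised least squares."""
    A = X.T @ X / sigma2_e + R.T @ R / sigma2_v
    b = X.T @ y / sigma2_e + R.T @ r / sigma2_v
    return np.linalg.solve(A, b)
```

As sigma2_v shrinks to zero the constraint is imposed exactly; as it grows the estimator approaches unconstrained least squares, which is what makes the sensitivity of the results to the constraints easy to assess.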

Ilmakunnas applied these approaches to a model of structural change in Japanese industry and to Berndt and Wood's study of the demand for energy in American industry. He found that the results are changed markedly by introducing uncertainty into the constraints, showing the potential importance of the approach and giving a further illustration of the fragility of econometric results. He noted, however, that if the constraints or the parameters are not subject to errors, the use of either of the two methods will result in worse estimates.

This last point prompted Richard Blundell (University College, London, and CEPR) to emphasise the importance of testing for random parameters before estimation is undertaken: several such tests are available in the econometric literature. Blundell also argued that these tests should be carried out against plausible and interesting alternative hypotheses and should not be treated as casual afterthoughts. Christopher Bliss (Nuffield College, Oxford, and CEPR) pointed out that the aggregate quarterly data used by Ilmakunnas might be inadequate to test hypotheses derived at the level of the individual agent. Begg and Wickens also drew attention to the need for an appropriate theoretical framework: factors such as habit formation and costly adjustment might also be important. In the study of technical change that Ilmakunnas discussed, a gradual learning process was probably in operation, and his methods may have attributed the effects of learning to errors of optimisation or aggregation.

In 'Monetary Stabilisation Policy in an Open Economy', Marcus Miller (Warwick and CEPR) analysed the design of monetary stabilisation policy when a floating exchange rate and sluggish domestic price adjustment cause the exchange rate and the real interest rate to respond to disturbances more quickly than does the rest of the economy. The objective of the government in Miller's model is to minimise the weighted sum of the deviations of output and of the inflation rate from their target values.

In a closed economy, the optimal anti-inflation policy involves reducing output; once inflation has been eliminated, output can be increased. Difficulties arise with such a policy in an open economy, however: a flexible exchange rate, through its effect on import prices, will influence the inflation rate. Arbitrage in capital markets means that the exchange rate will in turn depend on the expected differential between domestic interest rates and those in the rest of the world. Core inflation will therefore reflect sophisticated forward-looking behaviour, and policy-making is complicated by the effect of monetary policy on the exchange rate.

Miller examined three different policy rules. In the first, the policy makers choose the path for interest rates which is optimal at each moment in time. This policy may well be 'time-inconsistent': the interest rate that the policy maker announces today as the plan for some future time may not be optimal when that time arrives. Future interest rates may be different from those announced in plans today even though the economic circumstances which gave rise to the original plan remain unchanged. The second policy considered by Miller is one which is optimal, subject to the constraint that the government may not renege at a later date on its current announcements of future policies; this policy is termed 'time-consistent'. The third policy analysed by Miller is one in which interest rates are set with no consideration of their effect on the exchange rate.

In a simulation study, Miller found that the third, 'exchange-rate exogenous' policy performed very well, outperforming the second policy and nearly matching the first policy (in which policy makers do the best they can in every period). Indeed in the simulations he conducted, Miller detected a close resemblance between an optimal policy based on a simple feedback rule, in which policy instruments respond to the state of the economy, and the 'exchange-rate exogenous' policy, even though this policy rule is not based on feedback.
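A feedback rule of the kind described, in which the policy instrument responds linearly to the current state of the economy, can be sketched in a single line. The coefficients here are purely illustrative, not those in Miller's simulations:

```python
def feedback_rule(output_gap, inflation, neutral=2.0, alpha=0.5, beta=1.5):
    """A simple feedback rule: set the interest-rate instrument as a
    linear function of the current state of the economy.
    Coefficients are illustrative only."""
    return neutral + alpha * output_gap + beta * inflation
```

With the economy at target the rule returns the neutral rate; a positive output gap or positive inflation raises the instrument in proportion.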

David Currie (Queen Mary College, London, and CEPR) observed that Miller's results implied that policy makers could make more gains from allowing for the openness of the economy than they could from being able to renege on their past commitments. Essentially this arises when exchange rate appreciation allows the 'export' of domestic inflation, but there is always the problem that other governments can adopt similar tactics. Perhaps coordination would be better in such circumstances. Currie also took issue with Miller's argument that many problems arise because policy-makers attach too little weight to the future implications of current policy. Although many problems would be resolved if governments took full account of the future, Currie found it unreasonable to assume they would do so. Matti Pohjola (University of Helsinki) was interested in the 'game' played between policy makers and private agents and wondered precisely how it should be modelled, given that policy makers seemed to derive an advantage from their ability to set policies and to influence private sector behaviour. There was some discussion of this question and of the technical problems involved in calculating the various optimal policies that Miller used.

Irving Fisher defined the real interest rate to be the nominal rate minus the expected rate of inflation. Other things being equal, he argued, the real rate should remain constant: it is the nominal rate which should adjust as expectations vary. In his paper, 'Inflation, Hedging and the Fisher Hypothesis', Matti Viren (Bank of Finland) examined the behaviour of real interest rates in various countries over the past 20 years. Viren used a proxy for expected inflation rates in his regression analysis, which was designed to assess how far the real rate has been constant. His approach allowed for a variety of random shocks to the economy and for uncertainty regarding inflation. Viren also sought to contrast the Fisher Hypothesis with a model of nominal interest rates based upon 'hedging' by investors against inflation. In this model interest rates contained a 'risk premium' to compensate investors for the risk of capital loss owing to inflation.

Viren constructed his proxy for expected inflation as an average of past actual rates, but this produced poor results in the regressions. The coefficient on expected inflation should be approximately one if Fisher's Hypothesis were correct, but Viren's estimates were much lower than this. Furthermore, the results did not appear reliable for any of the countries considered. The hedging hypothesis, on the other hand, seemed to produce much better results and may provide a better explanation of the behaviour of interest rates. But Viren's estimates for this model also seemed rather unstable and unreliable.
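A regression of this type can be sketched as follows: the nominal rate is regressed on a backward-looking proxy for expected inflation, and the Fisher Hypothesis predicts a slope close to one with the (constant) real rate as the intercept. This is a stylised reconstruction of the idea, not Viren's actual specification:

```python
import numpy as np

def fisher_regression(nominal, inflation, lags=4):
    """Regress the nominal interest rate on a backward-looking proxy
    for expected inflation (the average of the last `lags` actual
    inflation rates).  Under the Fisher Hypothesis the slope should
    be close to one."""
    inflation = np.asarray(inflation, dtype=float)
    proxy = np.array([inflation[t - lags:t].mean()
                      for t in range(lags, len(inflation))])
    y = np.asarray(nominal, dtype=float)[lags:]
    X = np.column_stack([np.ones_like(proxy), proxy])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [estimated real rate, slope on expected inflation]
```

Viren's finding was that estimated slopes of this kind fell well short of one, which is the sense in which the proxy 'produced poor results'.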

Discussion of Viren's paper centred around the interpretation of the Fisher Hypothesis, particularly the assumption 'all other things being equal'. Marcus Miller called attention to recent theories which attributed business cycle fluctuations to changes in patterns of intertemporal substitution, which were caused by variations in the real interest rate. Mervyn King (LSE and CEPR) suggested that any allowance for inflation hedging and capital risk should pay close attention to the details of the tax system. Miller emphasised the importance of the distinction between unanticipated and anticipated inflation, particularly in short-run analysis.

The econometric problems involved in measuring expected inflation also aroused interest. Richard Blundell thought Viren's use of a backward-looking measure was a weakness. Mike Wickens suggested that the actual rate was probably the best measure of the expected inflation rate and that it should be used in the estimation process. David Begg was concerned that Viren's estimates used the level of output and of the money supply as explanatory variables, without allowing for the fact that their values were determined jointly with the inflation rate. He thought it important to explain inflation using variables that were not themselves functions of inflation.

In 'Efficient Equilibrium in a Differential Game of Capitalism', Matti Pohjola (University of Helsinki) sought to apply the methods of game theory to investigate the potential efficiency of the basic capitalist system. Workers and capitalists are the 'players' in Pohjola's game, and each group tries to maximise its own welfare. Pohjola assumes that workers determine wages and hence the distribution of income. Capitalists control investment and hence the size of income. Under these circumstances both groups can increase their own welfare by cooperating with the other class. Without cooperation an 'inefficient' equilibrium can result, since each class has an incentive to exploit the other class.

Pohjola extends the model so that the game is played repeatedly, and players can remember the past. Then an efficient outcome can be reached: each player abides by the cooperative agreement, so long as neither player has cheated in the past. Each player realizes the costs of non-cooperation, so there is no incentive to deviate from the cooperative rule. This shows that changing the information available to the players will greatly change the outcome of the game, Pohjola observed.
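The supporting strategy can be illustrated with a stylised repeated game: each player cooperates so long as no one has ever cheated, and defects forever otherwise. The payoff numbers below are a standard prisoner's-dilemma illustration, not Pohjola's differential game:

```python
def grim_trigger(history):
    """Cooperate so long as neither player has ever cheated;
    defect forever otherwise."""
    return 'C' if all(pair == ('C', 'C') for pair in history) else 'D'

def always_defect(history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Repeated game with illustrative payoffs: (3,3) for mutual
    cooperation, (1,1) for mutual defection, (5,0)/(0,5) when one
    player exploits the other."""
    payoffs = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
               ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    history, total_a, total_b = [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history)
        b = strategy_b([(y, x) for x, y in history])  # b's point of view
        pa, pb = payoffs[(a, b)]
        total_a, total_b = total_a + pa, total_b + pb
        history.append((a, b))
    return total_a, total_b
```

Two such players sustain the efficient cooperative outcome in every round, while a cheat triggers permanent reversion to the inefficient equilibrium, leaving both players worse off: this is the sense in which memory of the past changes the outcome of the game.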

These results suggest that 'inefficiencies' such as unemployment and slow growth are not intrinsic to capitalism. Pohjola conceded that the structure of capitalism in his analysis was somewhat idealised: the 'classes' in his model act as homogeneous groups, and both 'players' have perfect information and are equally concerned about welfare in the future and in the present. In a less idealised setting, any inefficiency could be ascribed to one player's lack of knowledge of the 'game'.

Discussion of Pohjola's paper focussed on the technical problems involved in modelling a game that lasts over many periods. Mark Salmon (Warwick and CEPR) suggested that it should be possible to find a variable that acted as a 'sufficient statistic' for the game, incorporating all the relevant information concerning the past history of the game. Using such a variable, Pohjola's repeated game could be analysed as if it were a much simpler one-period game. Participants agreed that game theory, when used properly, could usefully illuminate many aspects of macroeconomics.

There is considerable debate as to whether the goal of full employment is best pursued by means of tax cuts or increased government expenditure. Neil Rankin (Queen Mary College, London, and CEPR) used a 'disequilibrium' framework to analyse this question in his paper 'Taxation vs Spending as the Fiscal Instrument for Demand Management: A Disequilibrium Welfare Approach' (available as CEPR Discussion Paper No. 84). Rankin based his analysis on a model that allows supply and demand to differ in the goods and labour markets, although the money market is assumed to be in equilibrium.

The results of such an analysis are sensitive to the nature of government expenditure, and in order to assess this sensitivity, Rankin considered three cases, in which government spending is treated as 'waste', a consumption good, or an investment good. For each of these cases he calculated the 'optimal' policy, namely that which produces full employment and the greatest increase in the utility of consumers in the economy.
In Rankin's model increased government spending will increase welfare. The optimal policy, however, will be to use tax cuts to achieve full employment and government expenditure to achieve distributive goals. When he assumed that the government is obliged to balance its budget, however, matching expenditure to income, Rankin found that there are some grounds for using government spending to achieve full employment.

It is widely believed that the UK is at present experiencing 'Keynesian' unemployment, in which supply exceeds demand in both the goods and labour markets. Rankin found that if unemployment is Keynesian, increased government expenditure will increase welfare in all three cases.

Members of the workshop agreed with Rankin's approach but stressed the need to analyse the institutional and political influences on government spending and to incorporate monetary policy within the model.

The 'Ricardian Equivalence Theorem' has been the source of considerable controversy in recent years. According to this theorem, the choice between bond- and tax-financed government spending does not matter, since an informed private sector realises that taxes will have to be levied at some time to repay the increased expenditure and service the bond issue, and it saves now in anticipation of higher taxes in the future. This means that if a government chooses to finance spending by issuing bonds rather than by levying taxes, the bonds will not be treated as net wealth by the private sector.
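The logic of the theorem can be made concrete with a stylised two-period example (the numbers and the consumption rule are illustrative only): shifting a tax from today to tomorrow, while holding its present value fixed, leaves the household's lifetime resources, and hence its consumption, unchanged.

```python
def consumption_today(income, taxes, r=0.05, share=0.5):
    """Two-period household that consumes a fixed share of the
    present value of after-tax income.  Under Ricardian equivalence,
    replacing a current tax with a future tax of equal present value
    (a bond issue) leaves lifetime wealth, and so consumption,
    unchanged: the household simply saves for the future tax."""
    wealth = (income[0] - taxes[0]) + (income[1] - taxes[1]) / (1 + r)
    return share * wealth
```

Koskela's point, taken up in the paper below, is that this neutrality hinges on assumptions such as certain future incomes and perfect credit markets.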

In 'Tax Cuts, Risk Sharing and Capital Market Implications', Erkki Koskela (University of Helsinki) examined the impact of taxes on private sector spending decisions. Koskela argued that the Ricardian argument depended on a specialised and unrealistic set of assumptions, and that under more plausible assumptions Ricardian equivalence will fail to hold and the timing of taxes will have a Keynesian effect. Koskela emphasised four difficulties with the Ricardian argument. The first is uncertainty surrounding future income, which tends to reduce current consumption. A reduction in current taxes will increase current income, but disposable income must fall in the future when the higher taxes are levied. This expectation of increased taxation serves to decrease uncertainty surrounding future income, so a shift from taxes to bonds will cause current consumption to rise. Secondly, Koskela considered the effect of distortionary taxes: a shift in these taxes from the present to the future will have an ambiguous effect on current consumption. Thirdly, he found that Ricardian equivalence also collapses if there is uncertainty about the future course of wages. Finally, he allowed for the effect of credit market imperfections such as credit rationing and differing rates for borrowing and lending. Such imperfections also imply that the choice between taxes and bonds will have real effects on the economy.

Matti Viren commented that the paper dovetailed neatly with other recent work, most of which also tends to reject Ricardian equivalence. Since a proportion of future taxes may be levied beyond the lifetime of individuals in the private sector, it is important to allow for bequests, and this casts further doubt on the theory. Colin Mayer (St Anne's College, Oxford, and CEPR) suggested that the dates when taxes are levied might affect total tax revenue, because of the resultant differences in patterns of consumption. If so, it could be difficult to foresee future tax levels. Ailsa Roell (LSE) commented on other recent work, especially by King, that gave plausible theoretical reasons for a divergence between borrowing and lending rates.

In 'The Implication of State and Local Taxes on Corporate Policy', Vesa Kanniainen (University of Helsinki) discussed a puzzling feature of the Finnish tax system. It would appear that firms do not claim all the corporate tax allowances they are permitted. Why is this? Kanniainen noted that Finnish companies in fact face both national taxes such as the profits tax, and local taxes which are based on the size of the firm. There are also various tax allowances designed to increase investment, and the firm must choose which ones it wishes to claim. Reported profits may not match actual profits, although dividends must be paid from the reported figure. Kanniainen argued that far from being irrational, the failure to claim tax allowances represents optimal behaviour, designed to reduce the impact of local taxes. Kanniainen used a formal model to demonstrate this, and he then used a dynamic model to examine whether this behaviour was also optimal over time.

The workshop generally agreed that Kanniainen's explanation of the paradox was probably correct, but Colin Mayer wondered whether the analysis could accommodate the effects of corporate finance and credit markets.

Colin Mayer (St Anne's College, Oxford, and CEPR) presented the final paper of the workshop, entitled 'Company Expectations and New Information: An Application of Kalman Filtering' (available as Discussion Paper No. 62). This joint work with Matthias Mors attempted to model how UK companies formed their expectations concerning variables such as sales and employment. Mayer and Mors took qualitative data from CBI surveys giving the proportion of respondents who indicate that they expect these variables to 'increase', 'decrease' or 'remain the same', and they converted these data into a form which could be compared to the realised values of the variables.

The 'Kalman filter' was designed by engineers as a statistical device to combine uncertain observations on a variable with theoretical information, to provide the best estimate of the actual state of the variable. Mayer used this technique to develop simple models relating firms' expectations to the past information available to them.
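In its simplest scalar form the technique can be sketched in a few lines: a random-walk model of the underlying variable is combined with each noisy observation, weighted by the 'Kalman gain', to produce a minimum-variance estimate. This is a minimal textbook sketch, not Mayer and Mors's specification:

```python
def kalman_filter(observations, x0=0.0, p0=1.0, q=0.01, r=0.1):
    """Minimal scalar Kalman filter: combine a random-walk model of
    the state (process variance q) with noisy observations
    (measurement variance r) to estimate the underlying variable."""
    x, p, estimates = x0, p0, []
    for z in observations:
        p += q                # predict: model uncertainty grows
        k = p / (p + r)       # Kalman gain: weight on new observation
        x += k * (z - x)      # update the estimate towards the data
        p *= 1 - k            # posterior variance shrinks
        estimates.append(x)
    return estimates
```

The balance between q and r governs how much weight falls on the theoretical model relative to the observed data, which is the trade-off referred to in the discussion of the results below.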

Expectations of orders and sales are difficult to model, yet even the simplest statistical models constructed by Mayer tracked the actual values of orders and sales considerably better than the data on companies' own expectations. This suggested that firms do not use even the simplest forecasting rules, although another possible explanation is that inter-firm effects are not allowed for in the CBI data. Company expectations concerning employment were much closer to the actual history of employment, however. Mayer suggested that this was because firms effectively control their own employment levels and therefore are reporting their plans rather than their expectations.

Mayer speculated that the poor performance of the expectations model for orders and sales might be due to the fact that forecasting procedures vary across firms. This had led Mayer and Mors to develop a model that allowed the expectations data to be generated by a mixture of different forecasting methods, and this approach proved much more successful.

Timo Terasvirta (University of Helsinki) suggested that the expectations data were difficult to model because they reflected the views of a variety of firms, each subject to different production lags. He also wondered whether the results indicated that firms are pessimistic in their forecasts. Mayer explained that the method of converting the qualitative expectations data into quantitative form had assumed that on average expectations were correct. He also speculated that the results showed that firms place too much weight on the theoretical model and too little on the observed data.

The workshop stimulated lively interchanges among the participants, and the organisers hope to follow it up soon with further collaborative activities, initially in applications of game theory and in disequilibrium macroeconomics and econometrics.