The McNamara fallacy in financial policymaking

The excessive build-up of risk before 2007 was missed in spite of all the numbers being in front of us. Jon Danielsson explains how financial policymakers have in many respects fallen for the 'McNamara fallacy': relying solely on what can be measured and quantified, preferring to regulate by models, and focusing on perceived rather than actual risk.

First posted on modelsandrisk.org

One of the puzzling things about post-crisis financial policymaking is that we acknowledge having missed the excessive build-up of risk before 2007, in spite of having all the numbers right in front of us, while at the same time founding the new regulatory order on numbers and measurements. Have the policymakers fallen for the 'McNamara fallacy'?

The McNamara fallacy is named after Robert McNamara, president of the Ford Motor Company and then US Secretary of Defense during the early years of the Vietnam War. His strategy for fighting the war was based on the approach he had developed at Ford, where everything was measured and quantified.

When it came to fighting the Vietnam War, he argued that "things you can count, you ought to count", including the body count. The problem was that the authorities only measured what could be measured. Hence they thought they were winning the war, which they were, but only on paper.

McNamara explained his philosophy in 1967:

“It is true enough that not every conceivable complex human situation can be fully reduced to the lines on a graph, or to percentage points on a chart, or to figures on a balance sheet, but all reality can be reasoned about. And not to quantify what can be quantified is only to be content with something less than the full range of reason.”

The problem with this approach was well demonstrated by the sociologist Daniel Yankelovich, who in 1972 described what he called the “McNamara fallacy”:

“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.”

OK, then, have the financial policymakers today fallen for the McNamara fallacy?

In many respects, yes. They prefer to regulate by models, focusing on perceived risk rather than actual risk.

In policymaking, subjectivity is almost inevitably seen as bad and objectivity as good. But to be objective, one needs something objective to measure and control, and that requires numbers and statistical models.


Take bank capital. There are, of course, a myriad of ways to determine how much capital a bank should hold, such as Basel risk-weighted assets and the leverage ratio, to name just two. Each of these has a lot of problems, not least because of its dependence on measurements, and on how the regulated entities interact with those measurements.
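To make the contrast concrete, here is a minimal sketch of how the two measures can tell different stories about the same balance sheet. The bank, the exposure figures, and the risk weights below are invented for illustration and are not the actual Basel calibration.

```python
# Stylised illustration: a risk-weighted capital ratio versus a leverage
# ratio for the same hypothetical bank. All numbers and risk weights are
# made up for this example.

capital = 5.0  # equity, in billions

# Exposures as (amount, assumed risk weight)
exposures = {
    "sovereign_bonds": (60.0, 0.00),
    "mortgages":       (30.0, 0.35),
    "corporate_loans": (10.0, 1.00),
}

total_assets = sum(amount for amount, _ in exposures.values())
risk_weighted_assets = sum(amount * weight for amount, weight in exposures.values())

risk_weighted_ratio = capital / risk_weighted_assets  # depends on the chosen weights
leverage_ratio = capital / total_assets               # ignores the weights entirely

print(f"Risk-weighted capital ratio: {risk_weighted_ratio:.1%}")  # about 24%
print(f"Leverage ratio:              {leverage_ratio:.1%}")       # 5%
```

In this sketch the risk-weighted ratio looks comfortable while the leverage ratio is far thinner, precisely because the measurement itself decides which exposures count and by how much.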

A much simpler, and quite possibly better, way to set capital would be for the authorities to predetermine the amount of capital a bank has to hold, using their subjective judgment.

But of course, that can never happen. We need scientific objectivity, and so we are stuck with the McNamara fallacy.