VoxEU Column: Frontiers of economic research

Measuring success in economics

Academic economists, like any other group of professionals, are extremely competitive and concerned with measuring their own success. This column argues that one cannot rank individual scholars’ achievements by the traditional summary measures, such as where their research is published or the institution with which they are affiliated. Properly judging success in economics requires paying attention to individual outcomes, not to aggregates that are poor signals of the individual results of which they are comprised.

Like any other group of professionals, academic economists are extremely competitive and very concerned with measuring their success—with professional navel-gazing. They are focused on, and in a few cases even obsessed with, measuring the impact of their work on scholarship, on the wider world, and on their standing relative to others in their profession.[1] Rankings and measures of success serve a broader purpose than mere within-group competition—they indicate the extent to which a scholar’s achievements as a researcher affect other scholars and thus, in the end, public debate and public policy. The issue is how to compare achievements, and for that reason it is particularly important to have some agreed-upon measures of individual and group achievement.

Most academic economists (and other researchers) judge their own and their peers’ achievements by the number of their publications, with special emphasis on publications in journals considered more prestigious. The reason is simple: these signals of achievement require very little effort to gather and almost no thought. While a select group of economics journals is commonly referred to as the “Top Five,” judging an article’s quality simply by where it appears is an extremely poor way of assessing it.[2] At the very least, the extent to which journals affect other research—the citations received by the articles they publish—merits attention. A tree may fall in a very popular forest, but if nobody hears it fall, does it matter?

Within this select group of journals there is tremendous heterogeneity among published articles. Taking each article appearing in these journals in 2007 and 2008, Figure 1 graphs the distribution of the number of citations it received from its publication date up to January 2015, as measured in the Web of Science.[3] While the average paper had been cited 50 times, the median paper had received only 35 citations from scholars and other authors who might read the journal. Moreover, only 21% had received more than 100 citations. The prestige of the top journals in economics arises primarily because a few of the articles they publish attract very wide attention. While the others are not entirely ignored, their impact is small.[4]

Figure 1. Distribution of Web of Science citations to Top Five publications, 2007-08
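
The summary statistics reported above require nothing more than a list of per-article citation counts. A minimal sketch follows, using hypothetical counts rather than the Web of Science data behind Figure 1, simply to show the calculation:

from statistics import mean, median

# Hypothetical per-article citation counts; the actual figures in the text
# come from Web of Science records for Top Five articles published in 2007-08.
citation_counts = [3, 12, 20, 35, 35, 48, 60, 95, 110, 250]

avg = mean(citation_counts)
med = median(citation_counts)
share_over_100 = sum(c > 100 for c in citation_counts) / len(citation_counts)

print(f"mean citations:   {avg:.1f}")
print(f"median citations: {med:.1f}")
print(f"share cited more than 100 times: {share_over_100:.0%}")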

Much scholarly research published in other journals has an impact exceeding that of research published in the most prestigious outlets. Taking the two general economics journals viewed as the next steps down the publishing “food chain” in economics, Figure 2 graphs where citations to articles in these lesser outlets would place them had they been published in one of the Top Five outlets.[5] While the majority of these articles do attract less attention than articles in the top journals, even the median-cited article in these “lesser” journals receives more attention than 30% of the publications in the top journals. Academic output is heterogeneous, so judging it requires considering individual cases rather than simply evaluating research by where it is published.

Figure 2. Centile of citations to articles in the Economic Journal and Review of Economics and Statistics in the distribution of citations to Top Five publications, 2007-08
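
The centile placement shown in Figure 2 is a simple percentile-rank calculation: count how many Top Five articles received no more citations than the article in question. The sketch below uses hypothetical citation counts and an assumed helper, centile_in_top_five, purely for illustration:

from bisect import bisect_right

# Hypothetical distribution of citations to Top Five articles, kept sorted.
top_five_citations = sorted([3, 12, 20, 35, 48, 60, 95, 110, 250, 400])

def centile_in_top_five(citations: int) -> float:
    """Share of Top Five articles cited no more than `citations` times."""
    return bisect_right(top_five_citations, citations) / len(top_five_citations)

# An article from a lower-ranked journal cited 40 times would sit above
# this share of the Top Five distribution:
print(f"{centile_in_top_five(40):.0%}")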

Research in economics is increasingly characterised by co-authorship—today the overwhelming majority of scholarly articles have two or more authors, with substantial fractions having three or more (see Hamermesh 2013). How can potential employers, students and outsiders evaluate an individual’s contribution to what is typically the result of joint production? The difficulty is compounded when senior contributors, whose reputations are established, have incentives to hype the contributions of junior co-authors who may be seeking jobs or promotions.

There is no way to know ex ante the impact a scholarly article will have, but we can relate the subsequent citations received by an economics article to the number of its authors. This production function for scholarly impact indicates the average attention received by research distinguished by the number of researchers who produce it, and it implicitly measures the marginal scholarly productivity of an additional co-author.

Figure 3 presents the average number of subsequent Web of Science citations received by articles published in the “Top Five” journals in 2007 and 2008 as a function of the number of authors. Additional co-authors are indeed on average productive of additional citations, but the marginal effect on scholarly impact of adding another co-author is far less than proportional. Going from sole- to two-authored works increases citations by barely 20%, and going from one to four authors does not even double the scholarly impact of an article. The marginal productivity of an additional co-author is, as economic theory would suggest, diminishing in the number of co-authors.

Figure 3. Mean Web of Science Citations to Top Five publications, 2007-08, by number of authors

The diminishing marginal productivity of additional co-authors suggests that in evaluating the success of economics researchers one cannot simply count articles, or articles adjusted by journal quality, or even, as would be better, the citations an individual article receives. In judging newly published research, the impact credited to each individual author must be deflated, perhaps by dividing by the number of authors, N, but at the very least by some number greater than one. At the end of the day, many years after publication, when the eventual scholarly impact of research has been nearly fully revealed, dividing credit by the number of authors seems the only sensible approach to measuring an individual’s contributions and success.
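
A minimal sketch of such a deflation rule follows. The citation numbers and the choice of deflator (N itself, or a milder square root of N) are assumptions made for illustration; the column itself only requires dividing by some number greater than one:

from math import sqrt

def per_author_credit(citations: float, n_authors: int, rule: str = "divide_by_n") -> float:
    """Deflate an article's citations to credit a single co-author.

    The deflator is an assumption: full fractional credit (divide by N)
    or a milder alternative (divide by sqrt(N)), both greater than 1 for N > 1.
    """
    if rule == "divide_by_n":
        return citations / n_authors
    if rule == "divide_by_sqrt_n":
        return citations / sqrt(n_authors)
    raise ValueError(f"unknown rule: {rule}")

print(per_author_credit(120, 3))                      # 40.0
print(per_author_credit(120, 3, "divide_by_sqrt_n"))  # about 69.3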

The degree of heterogeneity in the achievements of individual economists is also huge. Even taking elite senior scholars—those who received their doctorates over 15 years ago and who are located in schools ranked in the top 20 in the US—a superstar (i.e. a scholar who is among the top 1% of researchers in this group) is cited 14 times as frequently as the median scholar in the group; and the median scholar is cited 6 times as often as someone in the lowest decile of researchers ranked by citations in this elite group. A few individuals account for the overwhelming attention paid to this group: 15% of these economists account for half of all the citations the group receives. There are a few superstar economists, more stars, even more planets, but the majority are scholarly asteroids.
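
The concentration figure quoted above (15% of these economists accounting for half of the group’s citations) can be computed by ranking scholars by citations and finding the smallest top share whose citations sum to half the total. The sketch below does this with hypothetical counts:

def share_of_scholars_for_half(citations_per_scholar):
    """Smallest share of scholars (ranked by citations) whose citations
    sum to at least half of the group's total."""
    ranked = sorted(citations_per_scholar, reverse=True)
    total = sum(ranked)
    running = 0
    for i, c in enumerate(ranked, start=1):
        running += c
        if running >= total / 2:
            return i / len(ranked)
    return 1.0

# Hypothetical citation counts for ten scholars.
hypothetical = [900, 400, 150, 90, 60, 40, 25, 15, 10, 5]
print(f"{share_of_scholars_for_half(hypothetical):.0%} of scholars account for half of all citations")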

In addition to judging individuals’ success, we also judge the achievements of groups of economic researchers—i.e. the faculties with which individuals are affiliated—as these judgments are important for students considering post-graduate study, young researchers seeking employment and outsiders seeking experts on economic issues. A wide variety of measures, whether based on publications or citations and whether looking at averages, medians, or the achievements of the best researchers at an institution, yields remarkably similar rankings, with Harvard, Princeton, Chicago, Stanford, MIT and Berkeley consistently ranked among the top ten economics faculties in the US. Top economics faculties in Europe generally rank well below this group by these usual measures.[6]

Here too, though, paying attention to heterogeneity—considering the achievements of individual faculty members—is at least as important as looking at summary statistics. At least 25% of the researchers in economics faculties ranked 6th to 10th have records that place them above the median-cited scholar in at least one economics faculty ranked 1st to 5th, and researchers in schools ranked, on average, in the second ten have records that would place them above the median-cited scholar in at least one of the faculties ranked in the top ten. The same is true for the very best European economics faculties. There is tremendous overlap in quality among elite institutions, so that judging faculties by average achievements misses most of the variation in individual achievements.

The central message from examining the achievements of economists is that one cannot rank individual scholars’ achievements—their careers or their individual research contributions—by summary measures, such as where the research is published or the institution with which a scholar is affiliated. Top schools have some mediocre scholars, while lower-ranked schools have some stars. Top journals publish more of the very best scholarly research than other journals, but they also publish a lot of research that is mostly ignored. Properly judging success in economics requires paying attention to individual outcomes, not to aggregates that are poor signals of the individual results of which they are comprised.

Author’s note: This column is based on NBER Working Paper No. 21754, “Citations in Economics: Measurement, Uses and Impacts”.

References

Cartter, A. (1966), An Assessment of Quality in Graduate Education, Washington, DC: American Council for Education.

Davis, P. and G. Papanek (1984), “Faculty Ratings of Major Economics Departments by Citations”, American Economic Review 74, pp. 225-250.

Dusansky, R. and C. Vernon (1998), “Rankings of U.S. Economics Departments”, Journal of Economic Perspectives 12, pp. 157-170.

Ellison, G. (2013), “How Does the Market Use Citation Data? The Hirsch Index in Economics”, American Economic Journal: Applied Economics 5, pp. 63-90.

Hamermesh, D. (2013), “Six Decades of Top Economics Publishing”, Journal of Economic Literature 51, pp. 162-172.

Endnotes

[1] Numerous rankings of individuals and groups of economists have been produced for over four decades, with a few examples being Davis and Papanek (1984), Dusansky and Vernon (1998) and Ellison (2013).

[2] These are the American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics and Review of Economic Studies.

[3] These and subsequent results in this article are essentially unchanged if one uses the other main source of citations to scholarly articles, Google Scholar.

[4] The same conclusion is produced if we take a longer perspective and examine subsequent citations to articles published in these journals in 1974 and 1975.

[5] These are the Economic Journal and the Review of Economics and Statistics.

[6] These six schools ranked in the top seven in 1964 in the first comprehensive ranking of economics programmes in the US; see Cartter (1966).