There exists a lively debate among scientists about evaluation methods. Some prefer peer review-based research assessments, while others think that bibliometric citation-based methods should be used as a verifiable mechanism for promotion and distribution of public research funds. Like peer reviews, but for other reasons, citations suffer from several problems. One of them is that they are related to the order in which editors arrange the sequence of papers in each issue of a journal. Research by Smart and Waldfogel (1996), Ayres and Vars (2000), Pinkowitz (2000), and Hudson (2007) finds that leading articles – those at the front of the journal – get more cites than others.1 This is tested by running regressions of the number of cites on the order in which the paper is placed and on some control variables.
Readers thus seem to believe that journal editors are smart enough to pick the ‘best’ paper ready for the coming issue and place it first. They also believe that the paper editors deem best actually is the best.
In recent work with co-authors (Coupé et al. 2010), I run an analysis that compares the number of cites, conditional on ordering, under two publication strategies: random versus selective ordering of papers. The European Economic Review (EER) provides a natural experiment due to an editorial quirk.
Between 1975 and 1997, some issues ordered papers by the initial of the first author’s surname; others did not. As long as we are ready to accept that alphabetical order is random, in the sense that on average it cannot separate good papers from bad ones, this can be treated as a natural experiment. It allows us to untangle whether leading papers are more cited because they lead or because they are of higher quality.
If leading papers also get more cites than others in alphabetically ordered issues, one may wonder whether editors are really good judges of quality when they order papers at their discretion. In that case, leading papers would be more cited because they are leading (and readers expect them to be better), not because they are of better quality.
To check for consistency, we also compare this with cites to papers in American Economic Review (AER), where, except by chance, the order is never alphabetical.
Our results show:
- Leading papers get marginally more cites in all three groups (EER alphabetically ordered issues, EER non-alphabetically ordered issues, and AER).
- As expected, the effect for AER is much larger than for EER.
But the difference in the mean number of cites between AER and EER papers is not very large (5 vs 2 cites). Moreover, for EER the marginal effect on citations of being the first paper is not very different in alphabetical and non-alphabetical issues (1.9 vs 2.8), though a likelihood ratio test shows that the difference is statistically significant.
This suggests that the lead article is of better quality when editors exercise discretion, but that citation numbers overstate how much better it is. Based on the estimates, two thirds of the effect (1.9/2.8) is the result of going first, and only one third can be attributed to better quality. Note that while there is no difference between the first and second paper in AER, cites in EER decrease after the first paper.2
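The decomposition implied by these estimates is one line of arithmetic, using the 1.9 and 2.8 figures reported above:

```python
# Decomposing the lead-article premium using the figures reported above.
# 1.9 = extra cites for the lead paper in alphabetical (random) issues,
#       i.e. the pure effect of going first;
# 2.8 = extra cites in discretionarily ordered issues (position + quality).
position_effect = 1.9
total_effect = 2.8
share_position = position_effect / total_effect  # ~0.68, about two thirds
share_quality = 1 - share_position               # ~0.32, about one third
print(f"going first: {share_position:.0%}, quality: {share_quality:.0%}")
```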
Long papers are more cited than short ones, and notes are usually less cited (for AER the difference is quite large). The sequence of annual dummies representing the year of publication, and thus the age of the paper in 2000, picks up coefficients that decline in the case of AER: recent papers get fewer (cumulated) cites. The coefficients show no particular trend for EER. One possible reason is that the natural decline in cites to more recent papers is offset by additional cites due to the increasing average quality of EER over time.
The ordering by the initial of the name may not be entirely random, since, in economics, authors’ names usually appear in alphabetical order. It is thus possible that lead papers in alphabetically ordered issues are more likely to be co-authored. To the extent that such papers get more cites, either because they are of better quality or because of more self-citations, the lead article effect may simply capture the influence of a larger number of co-authors. We controlled for this by including the number of authors as a variable. Its effect is positive and highly significant under discretionary ordering, both in EER and AER, but insignificant in alphabetical issues. More importantly, including this variable, even when it was highly significant, did not change the sign or significance of the main variable of interest. This suggests that the estimated effect is a pure ‘lead article effect’.
Lessons for research evaluation
Objective methods such as those based on cites are not perfect and should therefore be used with care, or corrected: approximately two thirds of the additional cites that leading papers get seem to be due to the effect of going first, while only one third can be considered a genuine quality effect of the editors’ discretionary choice. Hence, given that most editors rank articles on the basis of their personal quality assessment, even ‘objective’ citation counts have an important subjective component.
In addition, the fact that leading articles get more cites simply because they lead may be costly for young scientists: well-established (and highly cited) scientists may get more cites than they truly deserve, since their reputation makes it more likely that their articles become lead articles. This practice may intensify the emergence of ‘superstars’, foster conservatism, and even crowd out some good articles by younger scientists who do not get properly cited.
The appearance of new electronic journals, as well as the move of long-established print journals online, may change these patterns. Scientists are now used to downloading individual papers and, in general, no longer handle whole issues of a journal (though the issue still exists, even if only virtually, and papers are still ordered within it). But the fact that paper copies no longer lie on a scientist’s desk will certainly influence citations in the future.
Author’s note: Our paper was published as a leading paper in issue 61(1) of Oxford Economic Papers. Did the editor want to make a joke? Or did he think it was better than the other papers published in the same issue? If I were you, I wouldn’t believe him.
Ayres, I and F Vars (2000), “Determinants of cites to articles in elite law reviews”, The Journal of Legal Studies, 29:427-450.
Cameron, AC and PK Trivedi (2005), Microeconometrics: Methods and Applications, Cambridge University Press.
Coupé, T, V Ginsburgh, and A Noury (2010), “Are leading papers of better quality? Evidence from a natural experiment”, Oxford Economic Papers, 61(1):1-11.
Hudson, J (2007), “Be known by the company you keep: cites – quality or chance”, Scientometrics, 71:231-238.
Pinkowitz, L (2000), “Research dissemination and impact: Evidence from web site downloads”, manuscript, available on SSRN’s website.
Smart, S and J Waldfogel (1996), “A cite-based test for discrimination at economics and finance journals”, NBER Working Paper 5460.
1 Hudson (2007) even finds that a highly cited paper in an issue has a positive impact on the cites of the other papers in the same issue.
2 It is also worth mentioning that, between 1985 and 1997, all AER papers appearing in positions 1 to 8 are cited, and only 2% and 4% of papers in positions 9 and 10, respectively, go uncited. This is far from the case in EER, where some 15% of papers never get cited, irrespective of their order.