There are many uncertainties associated with publishing in economics. A paper must be written, find sympathetic referees who do not demand impossible revisions, be somehow (and often iteratively) revised to the referees’ satisfaction, be deemed acceptable by the editor, and finally published. At each stage, the process could end abruptly. Authors have horror stories about unfair and untimely rejections and endless rounds of revisions. Editors have their own horror stories about referees and authors from hell with whom they have had to deal. Publication lags have made it increasingly difficult to publish in time for one’s tenure review.
Is there really a problem? What within the journals is behind it? How can journals do better? Using (not publicly available) data on all submissions to the Journal of International Economics (JIE), we are able to address a number of questions that previous studies of publishing in economics could not. For example, we are able to see whether it matters which editor handles the paper (i.e., test for editorial heterogeneity in standards) after controlling as best as possible for the quality of the paper itself. We can also ask whether the treatment of papers is even-handed (i.e., whether being well-known or well-connected helps one publish), examine the extent of type 1 (rejecting a good paper) and type 2 (accepting a bad one) errors at the journal, and speculate on the likely effects of a policy of desk rejecting a significant fraction of papers after a cursory review.
Examinations of publishing in economics typically study two kinds of question: the determinants of the time it takes to publish and evidence of bias in acceptances. Coe and Weinstock (1967), Yohe (1980), and Trivedi (1993) document a significant slowdown in the publication process over time. Bowen and Sundem (1982) show that the lion's share of accepted papers went through one or more revisions, while most rejected papers were rejected in the first round. Ellison (2002a), using data on published papers only, finds no statistically significant relationship between the submit-accept time and the authors' standing in the profession, as reflected in publications in top journals. He argues (Ellison 2002b) that while greater length and number of co-authors might account for a small part of the increase in the delay, greater emphasis on aspects of quality other than the main idea may well account for up to a quarter of it.
The second direction taken in this literature has been to test for bias in acceptances and rejections based, for example, on gender, closeness to editors or co-editors of the journal, the ranking of the author’s place of work (Blank 1991; Laband and Piette 1994) or the author’s professional age (Hamermesh and Oster 1998).
However, the lack of access to data on the inner workings of a journal has constrained the kinds of questions that could be asked. Most previous work in this area, with a few rare exceptions, is based on data on published articles from one or more journals or small random samples obtained from the editors of these journals. Not having data on all articles submitted limits the set of questions that can be asked (for example, one cannot look at differences in accepted and rejected papers).
We compiled a unique data set on all submissions to the Journal of International Economics over a decade to tackle these issues. Our more direct evaluation of a journal’s performance and its co-editors may help guide all the parties involved in the process.
From 1995 to 2004, the JIE received 3032 submissions, of which almost 600 articles (20%) were accepted for publication. Despite the journal's pages doubling (rising from 700 to 1400 pages per year) over that time, the acceptance rate nearly halved, from 27% to 14%. Most accepted papers go through at least one revision: only 0.6% of submitted papers were accepted with no revision, about 23% were sent for revision, and about 78% of these were finally accepted. Because the journal has no desk rejection policy, the difference in the time to first decision between papers that are not rejected in the first round (162 days) and those that are (132 days) is relatively small.
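As a quick consistency check, the figures above fit together: the paths to acceptance (outright acceptance plus revision followed by acceptance) imply an overall acceptance rate close to the reported 20%. A short sketch, using only the numbers reported in the text:

```python
# Back-of-the-envelope check of the JIE submission figures reported above.
# All inputs come from the text; "accepted" is approximate ("almost 600").
submissions = 3032
accepted = 600  # approximate

acceptance_rate = accepted / submissions
print(f"overall acceptance rate: {acceptance_rate:.1%}")  # ~19.8%

# Paths to acceptance: 0.6% accepted outright, plus the 23% sent for
# revision, of which about 78% are ultimately accepted.
implied_rate = 0.006 + 0.23 * 0.78
print(f"implied acceptance rate: {implied_rate:.1%}")  # ~18.5%, close to 20%
```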
For each submission between 1995 and 2004, we observe the authors' names, the title of the paper, the date of submission, the name of the co-editor who handled the article, the date of the first decision and subsequent decisions (if any), and the decisions themselves. In addition, we collected detailed data on the authors' backgrounds from their curricula vitae and data on the final outcome of each submission, such as its ultimate fate and its reception by the profession (citations).
There are many parties involved in the publication process who can gain from a better understanding of its inner workings. Since editors are, hopefully, interested in publishing good papers and rejecting bad ones, some indication of the extent of type 1 and type 2 errors should be very useful. The JIE has a high type 2 error rate (7% of published papers have no citations at all) and a low type 1 error rate: very few papers rejected by the JIE are accepted at better-ranked journals. Only 564 of 2434 rejected papers were published, and of these 564, only 14% were published in places ranked above the JIE; even these are cited roughly half as often as all JIE papers on average. This suggests that higher standards might be relatively costless. Our work also suggests that there is editorial heterogeneity in standards and that editors allocated worse papers tend to be more lenient, which is consistent with their underestimating the average quality of submissions to the JIE. Providing editors with feedback on how their standards rank relative to other editors could help reduce this heterogeneity.
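To make the type 1 error magnitudes concrete, a quick calculation using only the figures reported above:

```python
# Arithmetic behind the type 1 error claim (all inputs from the text).
rejected = 2434
later_published = 564
at_better_journals = round(0.14 * later_published)  # "only 14%" of the 564

share_published = later_published / rejected
share_upward = at_better_journals / rejected
print(f"{share_published:.1%} of rejected papers were published elsewhere")
print(f"{share_upward:.1%} of rejected papers appeared above the JIE in the rankings")
```

So only about 3% of all rejected papers ended up at a better-ranked journal, which is the sense in which the type 1 error is low.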
There is also some evidence of entry barriers in the publication process: for example, being well-connected (having published in network journals, where submissions are solicited) has a significantly positive effect in all specifications. This could be innocuous: better academics are better connected and write better papers, so their work is accepted more readily. However, if this were so, then such papers should also be cited more often, and controlling for citations should negate this effect. While this is partly so in the data (publications in network journals go from being highly significant to being less so, and the size of the coefficient falls), the effect remains, suggesting that more work with better controls for quality is warranted.
Finally, we look at the determinants of acceptance using a probit model, as well as a variation (using maximum likelihood techniques) that controls for the selection bias arising from author data being unavailable for those without a web presence. We find that our model predicts acceptance (based on author characteristics alone) very well. Running this regression out of sample and rejecting the papers whose probability of acceptance is in the bottom 25-33% results in almost no "wrong" rejections. In fact, for single-authored papers, we could reject the lowest 40% of the papers and make no mistakes. One would expect that by taking a quick look at the paper, the editor could do much better than such a mechanical approach! This suggests that desk rejection of a good share of papers would be possible at low cost.
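The mechanical rule described above can be sketched in a few lines. The paper estimates a probit with a selection correction on actual JIE data; the sketch below instead uses simulated data with hypothetical regressors (stand-ins for author characteristics such as past publications) and omits the selection step, showing only the mechanics: fit a probit by maximum likelihood, then "desk reject" the bottom third of papers by predicted acceptance probability and check how many accepted papers would have been lost.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated author characteristics (hypothetical stand-ins for the
# controls in the text, e.g. top-journal and network-journal publications).
n = 2000
X = rng.normal(size=(n, 3))
X1 = np.column_stack([np.ones(n), X])  # add an intercept
beta_true = np.array([-1.0, 1.0, 0.6, -0.4])
# Probit data-generating process: accept when the latent index is positive.
accepted = (X1 @ beta_true + rng.normal(size=n) > 0).astype(float)

# Fit the probit by maximizing the log-likelihood.
def neg_ll(beta):
    p = np.clip(norm.cdf(X1 @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(accepted * np.log(p) + (1 - accepted) * np.log(1 - p))

beta_hat = minimize(neg_ll, np.zeros(4), method="BFGS").x
p_hat = norm.cdf(X1 @ beta_hat)

# Mechanical desk-rejection rule: drop the bottom third by predicted
# acceptance probability and count the accepted papers that would be lost.
cutoff = np.quantile(p_hat, 1 / 3)
desk_rejected = p_hat < cutoff
wrong_share = accepted[desk_rejected].mean()
print(f"share of desk-rejected papers that were in fact accepted: {wrong_share:.1%}")
```

With a well-fitting model, the desk-rejected group contains far fewer accepted papers than the pool as a whole, which is the sense in which such a rule is nearly costless.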
Overall, the Journal of International Economics seems to be doing a good job of identifying quality, although tighter standards, feedback to co-editors on their performance, and a desk rejection policy would likely improve efficiency. Refining the set of papers sent to referees, in particular, would reduce the burden on all concerned at almost no cost in terms of performance. Moreover, providing feedback to co-editors, and possibly referees, on their relative performance, along with information on the quality composition of the papers they receive relative to the average, could help reduce heterogeneity in standards. Whether such policy recommendations apply to other economics journals requires more analysis, which in turn requires more data.
Blank, Rebecca M. 1991. “The Effects of Double-Blind versus Single-Blind Reviewing: Experimental Evidence from the American Economic Review.” American Economic Review, 81(5): 1041-1067.
Bowen, Robert M., and Gary L. Sundem. 1982. “Editorial and Publication Lags in the Accounting and Finance Literature.” The Accounting Review, 57(4): 778-784.
Cherkashin, Ivan, Svetlana Demidova, Susumu Imai, and Kala Krishna. 2008. “The Inside Scoop: Acceptance and Rejection at the Journal of International Economics.” NBER Working Paper No. 13957.
Coe, Robert K., and Irwin Weinstock. 1967. “Editorial Policies of Major Economic Journals.” Quarterly Review of Economics and Business, 7(4): 37-43.
Ellison, Glenn. 2002a. “The Slowdown of the Economics Publishing Process.” Journal of Political Economy, 110(5): 947-993.
Ellison, Glenn. 2002b. “Evolving Standards for Academic Publishing: A q-r Theory.” Journal of Political Economy, 110(5): 994-1034.
Hamermesh, Daniel S., and Sharon S. Oster. 1998. “Aging and Productivity among Economists.” The Review of Economics and Statistics, 80(1): 154-156.
Laband, David N., and Michael J. Piette. 1994. “Favoritism versus Search for Good Papers: Empirical Evidence Regarding the Behavior of Journal Editors.” Journal of Political Economy, 102(1): 193-204.
Trivedi, Pravin K. 1993. “An Analysis of Publication Lags in Econometrics.” Journal of Applied Econometrics, 8(1): 93-100.
Yohe, Cary W. 1980. “Current Publication Lags in Economics Journals.” Journal of Economic Literature, 18(3): 1050-1055.