The world of academia has changed over the last forty years. In those far-off days university lecturers might write a paper every few years, and this served to sustain their position and reputation. Now, every eight years or so, academics and universities in the UK are subject to an evaluation of their work. This is just one of a number of similar exercises across the world (Abramo et al. 2013). Hence, increasingly, and everywhere, the pressures are on to publish.
The impact of research on the economy
In the UK the current vintage of this is known as the Research Excellence Framework (REF), and the submissions will go in later this year.1 They will include, for the first time, an emphasis on the economic and social impact that research makes. This shift reflects the mood of the financially challenging times we live in, where research funders feel the need to demonstrate that research contributes positively to the economy and society.
Each REF submission must include impact case studies. In reality, these are not thorough quantitative measures of research costs and benefits; they are basically impact stories. However imperfect the methodology, it is challenging academics to think about their research in new ways. Traditionally, by contrast, they have focused on publishing their research and then sitting back, waiting and hoping for the plaudits, i.e. citations, to arrive.
Despite this shift in emphasis, the evaluation of published research still plays the key role in the REF. It involves grading up to four publications from each academic entered. The outcome matters for both universities and individual academics: money is involved, in the form of government funding, but the reputational aspect is just as important.
The REF is unpopular with many academics and has come in for a great deal of criticism (Smith et al. 2011).2 It does indeed have many weaknesses, for example in encouraging strategic behaviour. In addition, it fails to take account of the increasing tendency in much of academia for people to work together (Hudson 1996), with, in general, a multi-authored paper counting for just as much as a sole-authored one.
But despite its faults, by and large I am in favour of it, not least because it provides a more solid platform on which an individual’s worth becomes apparent to the university they work for. However, every process like this distorts incentives, and thus has some undesirable consequences. In the case of the REF, these consequences affect individuals, their disciplines, and the wider world.
Ranking individual papers
In order to evaluate the submitted research, there is a panel of experts. For economics, there are 18 members on the panel. Their job is to rate each piece of work from 1* to 4*. In the last such exercise there were almost 700 academics to evaluate (Lee et al. 2013).
But how to rank research? How can a relatively small group evaluate such a large amount of work in a short space of time? The widely held belief is that, despite claims to the contrary, in economics at least the quality of a paper is primarily judged on the basis of the quality of the journal in which it is published (Lee et al. 2013).3 Whether or not this is the case, it is certainly true that many universities are using journal quality as the basis for their submissions.
Economists tend to like order and certainty, and they tend to perceive a paper in one journal as definitely 4*, but in another as only 2*. Of course, it is not quite that clear-cut, and there is disagreement – sometimes substantial – at the margins. To provide clarity, there are a number of lists, such as the Keele list and the Association of Business Schools (ABS) list. One of the most extensive such lists, covering all disciplines, was compiled in Australia (Abelson 2009). Of course, the use of such lists by the panels is against the stated policy of the Higher Education Funding Council for England (HEFCE). But how can panel members avoid being influenced by publication in journals rated as 4*? Moreover, it would be a brave, or foolhardy, panel that subsequently ranked those papers as 2*.
All the above lists contain an element of subjectivity. The main contribution of my recent paper in the Economic Journal (Hudson 2013) is to link these lists to a set of metrics, mostly based on the number of citations papers in a journal get, and then use these metrics to produce up-to-date lists minus the subjective element.
The resulting lists contain journals which can confidently be called 4*, and others which can confidently be called 3*. But two journals can be very close together on the metrics, and to put one definitively in a specific category, and another in a different category, seems slightly arbitrary. Hence, the paper also classes journals as, for example, ‘probable 4*’ and ‘possible 4*’. For papers in these journals, the REF panel – and anyone else seeking to make a judgement on the work – had better read the paper and form a judgement, particularly if it is a new publication with citations thin on the ground.
The analysis shows that no single metric adequately captures economists’ perceptions of journal quality, although in combination they do better. Therefore, if metrics are to play a greater role in future evaluations, as some suggest should be the case, they would best be used in combination.
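To illustrate what using metrics ‘in combination’ might look like in practice, the sketch below normalises several citation metrics and combines them with weights into a single composite score per journal. The metric names, values, and weights are purely illustrative assumptions, not the actual method or figures of Hudson (2013).

```python
# Hypothetical sketch: combining citation metrics into a composite
# journal score. All metric names, values, and weights are
# illustrative assumptions, not figures from Hudson (2013).

def normalise(values):
    """Rescale a list of metric values to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(journals, metrics, weights):
    """Weighted sum of normalised metrics for each journal."""
    # Normalise each metric across all journals first, so that
    # metrics on very different scales can be combined.
    normed = {m: normalise([journals[j][m] for j in journals]) for m in metrics}
    return {
        name: sum(weights[m] * normed[m][i] for m in metrics)
        for i, name in enumerate(journals)
    }

# Illustrative data: impact factor, h-index, article influence score.
journals = {
    "Journal A": {"if": 4.1, "h": 120, "ais": 3.2},
    "Journal B": {"if": 2.3, "h": 95, "ais": 1.8},
    "Journal C": {"if": 1.1, "h": 40, "ais": 0.9},
}
weights = {"if": 0.4, "h": 0.3, "ais": 0.3}
scores = composite_scores(journals, ["if", "h", "ais"], weights)
# Journal A is highest on every metric, so it tops the composite ranking.
```

Bands such as ‘probable 4*’ and ‘possible 4*’ could then correspond to how close a journal’s composite score falls to a category boundary, rather than forcing a hard cut-off between near-identical scores.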
As I said earlier, the ABS list is being widely used in economics departments, as well as business departments, to determine their REF entry. But there are problems in this. My analysis indicates that the metrics which influence the ABS list are different to those that impact on the Keele and other lists compiled for economists.
Economists tend to value a focus on economics. A journal whose articles are widely cited in journals outside economics will tend not to be rated as highly as a similarly cited journal with a greater economics focus. This is not the case with the ABS list, where the analysis suggests that an economics focus is actually penalised.
Thus, the widespread use of the ABS list in economics departments, particularly as a basis for their REF decisions, is distorting incentives: it pushes economists to work in areas the ABS list does value, such as finance and management, and away from the many other areas in which economists work. This is not a criticism of the ABS list as such – it is, after all, designed for use in business schools – but of the way it is being used. In passing, we note that this may also be affecting the hiring and promotion strategies of some departments, and hence have a distortionary impact on the whole profession.
A reverence for the old
What else impresses economists in a journal? They tend particularly to value theory journals and to revere age: very old journals tend to be rated more highly than the metrics might strictly suggest, which may well reflect a bias towards what we have grown up with (Serenko and Bontis 2011). Hence, new journals face a tough time getting established unless backed by a major organisation, such as the American Economic Association. The main exception would be some of the electronic journals published by Berkeley.
The emphasis that economists put on their own discipline is not unusual, and in many ways is justifiable. After all, if you want to know which is the best economics journal it is presumably the one that has most citations in other economics journals. Similarly, when judging someone as an economist, you will rate more highly their publications in a 4* economics journal than a 4* sociology journal.
This mild case of myopia is probably true of many other disciplines as well. But there are many issues in the world which can benefit from the toolkit and perspective of good economists, such as the environment and governance. Yet the incentives, and not just those linked to the REF, deter the economist from publishing in journals in these areas, as well as deter top economics journals from publishing outside the core. That, in my view, is a pity.
Abelson, P (2009), “The Ranking of Economics Journals by the Economic Society of Australia”, Economic Papers 28(2): 176–80.
Abramo, G, Cicero, T and D’Angelo, C A (2013), “National Peer-Review Research Assessment Exercises for the Hard Sciences Can be a Complete Waste of Money: The Italian Case”, Scientometrics 95(1): 311–24.
Hudson, J (1996), “Trends in Multi-Authored Papers in Economics”, The Journal of Economic Perspectives 10(3): 153–58.
Hudson, J (2013), “Ranking Journals”, The Economic Journal 123(570): F202–22.
Lee, F S, Pham, X and Gu, G (2013), “The UK Research Assessment Exercise and the Narrowing of UK Economics”, Cambridge Journal of Economics 37: 693–717.
Serenko, A and Bontis, N (2011), “What's Familiar is Excellent: The Impact of Exposure Effect on Perceived Journal Quality”, Journal of Informetrics 5(1): 219–23.
Smith, S, Ward, V, and House, A (2011), “Impact in the Proposals for the UK's Research Excellence Framework: Shifting the Boundaries of Academic Autonomy”, Research Policy 40(10): 1369–79.
1 Details on the REF can be found in the Assessment Framework and Guidance on Submissions (http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf)
2 The literature on the REF itself is quite limited, but it is supplemented by a good deal of commentary on various blogs, including those of the London School of Economics and Political Science and the British Medical Journal. See http://blogs.bmj.com/bmj/2013/05/07/richard-smith-the-irrationality-of-the-ref/
3 The Guidelines say that “No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs.” http://www.ref.ac.uk/faq/researchoutputsref2/