Times Higher Education and Thomson Reuters are considering changes to their ranking methodology. It seems that the research impact indicator (citations) will figure prominently in their considerations. Phil Baty writes:
In a consultation document circulated to the platform group, Thomson Reuters suggests a range of changes for 2011-12. It would be very wise to do something drastic about the citations indicator. According to last year's rankings, Alexandria University is the fourth best university in the world for research impact, Hong Kong Baptist University is second in Asia, the Ecole Normale Superieure Paris is the best in Europe (with Royal Holloway, University of London, fourth), the University of California Santa Cruz is fourth in the USA, and the University of Adelaide is the best in Australia.
A key element of the 2010-11 rankings was a "research influence" indicator, which looked at the number of citations for each paper published by an institution. It drew on some 25 million citations from 5 million articles published over five years, and the data were normalised to reflect variations in citation volume between disciplines.
Thomson Reuters and THE are now consulting on ways to moderate the effect of rare, exceptionally highly cited papers, which could boost the performance of a university with a low publication volume.
One option would be to increase the minimum publication threshold for inclusion in the rankings, which in 2010 was 50 papers a year.
Feedback is also sought on modifications to citation data reflecting different regions' citation behaviour.
Thomson Reuters said that the modifications had allowed "smaller institutions with good but not outstanding impact in low-cited countries" to benefit.
If anyone would like to justify these results they are welcome to post a comment.
I would like to make these suggestions for modifying the citations indicator.
Do not count self-citations, citations to the same journal in which a paper is published, or citations from the same university. This would reduce, although not completely eliminate, manipulation of the citation system. If this is not done there will be massive self-citation and citation of friends and colleagues. It might even be possible to implement a measure of net citations by deducting the citations an institution gives from the citations it receives, thus reducing the effect of tacit citation agreements.
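The net-citation idea could be sketched as follows (the data and institution names are invented for illustration; this is one plausible reading of the proposal, not anything Thomson Reuters has specified):

```python
# cites[(src, dst)] = citations from institution src to institution dst
cites = {
    ("UniA", "UniB"): 120,
    ("UniB", "UniA"): 115,  # near-reciprocal pattern: a possible tacit agreement
    ("UniC", "UniA"): 40,
}

def net_citations(cites, inst):
    """Citations an institution receives minus citations it gives out.

    Mutual back-scratching between two institutions largely cancels,
    while citations from uninvolved third parties still count in full.
    """
    received = sum(n for (src, dst), n in cites.items() if dst == inst)
    given = sum(n for (src, dst), n in cites.items() if src == inst)
    return received - given

print(net_citations(cites, "UniA"))  # 155 received - 120 given = 35
```

On these numbers UniA's 115 citations from its reciprocal partner are almost entirely offset, which is the point: a tacit exchange of citations adds little, while the 40 citations from UniC survive intact.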
Normalisation by subject field is probably going to stay. It is reasonable that some consideration should be given to scholars who work in fields where citations are delayed and infrequent. However, it should be recognised that the purpose of this procedure is to identify pockets of excellence, and research institutions are not built around a few such pockets, or even a single one. There are many ways of measuring research impact and this is just one of them. Others that might be used include total citations, citations per faculty, citations per research income and the h-index.
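Of the alternatives just listed, the h-index is the most easily stated: the largest h such that h of an author's (or institution's) papers each have at least h citations. A minimal sketch, with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the paper in position `rank` still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Note that, unlike the normalised-impact indicator, a single freakishly cited paper can raise the h-index by at most one, which makes it far harder for one outlier to dominate the score.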
Normalisation by year is especially problematic and should be dropped. It means that a handful of citations, received in the year of publication, to an article classified as being in a low-citation discipline could dramatically multiply the score for this indicator. It also introduces an element of potential instability. Even if the methodology remains completely unchanged this year, Alexandria and Bilkent and others are going to drop scores of places as their papers go on receiving citations but get less value from them as the benchmark number rises.
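The instability argument can be put in numbers. Assuming the indicator is roughly "citations to a paper divided by the world average for papers in the same field and year" (my simplification of the published methodology, with invented figures):

```python
def normalised_impact(paper_cites, world_avg):
    """Field-and-year normalised impact: citations relative to the
    world average for papers of the same field and publication year."""
    return paper_cites / world_avg

# In its publication year, a paper in a low-citation field picks up
# 4 citations against a world average of 0.5: it scores 8x the benchmark.
early = normalised_impact(4, 0.5)
print(early)  # 8.0

# Two years later the paper has 10 citations, but the benchmark for
# that cohort has grown to 5.0: the same paper now scores only 2x.
later = normalised_impact(10, 5.0)
print(later)  # 2.0
```

So a university's indicator can fall sharply from one edition to the next with no change at all in the methodology, simply because the denominator grows faster than the citations to its early outliers.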
Raising the publication threshold might not be a good idea. It is certainly true that Leiden University has a threshold of 400 publications a year, but Leiden measures only research impact, while THE and TR measure a variety of indicators. There are already too many blank spaces in these rankings, and their credibility will be further undermined if universities are not assessed on an indicator with such a large weighting.