Thursday, October 26, 2006

The World’s Best Science Universities?

The Times Higher Education Supplement (THES) has now started to publish lists of the world’s top 100 universities in five disciplinary areas. The first to appear were those for science and technology.

For each disciplinary area, THES publishes a score for its peer review, conducted by people described variously as “research-active academics” or simply as “smart people”, along with the number of citations per paper. The ranking is, however, based solely on the peer review, although a careless reader might conclude that the citations were considered as well.

We should ask for a moment what a peer review, essentially a measure of a university’s reputation, can accomplish that an analysis of citations cannot. A citation is basically an indication that another researcher has found something of interest in a paper. The number of citations a paper receives therefore indicates how much interest it has aroused among the community of researchers, and this corresponds closely to the overall quality of the research, although occasionally a paper attracts attention because there is something very wrong with it.

Citations, then, are themselves a good measure of a university’s reputation for research. For one thing, the votes are weighted: a researcher who publishes a great deal casts more votes, and his or her opinion carries more weight than that of someone who publishes nothing. There are abuses, of course. Some researchers are rather too fond of citing themselves, and journals have been known to ask authors to cite papers by other researchers whose work they have published, but such practices do not make a substantial difference.

In providing the number of citations per paper as well as the score for the peer review, THES and their consultants, QS Quacquarelli Symonds, have really shot themselves in the foot. If the scores for the peer review and the citations are radically different, it almost certainly means that there is something wrong with the review. The scores are in fact very different, and there is something very wrong with the review.

This post will review the THES rankings for science.

Here are the top twenty universities for the peer review in science:

1. Cambridge
2. Oxford
3. Berkeley
4. Harvard
5. MIT
6. Princeton
7. Stanford
8. Caltech
9. Imperial College, London
10. Tokyo
11. ETH Zurich
12. Beijing (Peking University)
13. Kyoto
14. Yale
15. Cornell
16. Australian National University
17. Ecole Normale Superieure, Paris
18. Chicago
19. Lomonosov Moscow State University
20. Toronto


And here are the top 20 universities ranked by citations per paper:


1. Caltech
2. Princeton
3. Chicago
4. Harvard
5. Johns Hopkins
6. Carnegie-Mellon
7. MIT
8. Berkeley
9. Stanford
10. Yale
11. University of California at Santa Barbara
12. University of Pennsylvania
13. Washington (Saint Louis?)
14. Columbia
15. Brown
16. University of California at San Diego
17. UCLA
18. Edinburgh
19. Cambridge
20. Oxford


The most obvious thing about the second list is that it is overwhelmingly dominated by American universities, with the top 17 places going to the US. Cambridge and Oxford, first and second in the peer review, are 19th and 20th by this measure. Imperial College London, Beijing, Tokyo, Kyoto and the Australian National University are in the top 20 for peer review but not for citations.

Some of the differences are truly extraordinary. Beijing is 12th for peer review and 77th for citations, Kyoto 13th and 57th, the Australian National University 16th and 35th, Ecole Normale Superieure, Paris 17th and 37th, Lomonosov Moscow State University 18th and 82nd, the National University of Singapore 25th and 75th, Sydney 35th and 70th, and Toronto 20th and 38th. Bear in mind that there are almost certainly several universities that were not in the peer review top 100 but have more citations per paper than some of these institutions.
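
For readers who want to see these gaps at a glance, here is a short Python sketch that tabulates the pairs quoted above and sorts them by the size of the discrepancy; it uses only the figures given in this post, and the calculation is nothing more than the difference between the two positions.

    # Tabulate the (peer review rank, citations-per-paper rank) pairs quoted
    # above and sort them by the size of the gap between the two rankings.
    pairs = {
        "Beijing": (12, 77),
        "Kyoto": (13, 57),
        "Australian National University": (16, 35),
        "Ecole Normale Superieure, Paris": (17, 37),
        "Lomonosov Moscow State University": (18, 82),
        "National University of Singapore": (25, 75),
        "Sydney": (35, 70),
        "Toronto": (20, 38),
    }

    for name, (peer, cites) in sorted(pairs.items(),
                                      key=lambda kv: kv[1][1] - kv[1][0],
                                      reverse=True):
        print(f"{name}: peer review {peer}, citations {cites}, gap {cites - peer}")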

It is no use saying that citations are biased against researchers who do not publish in English. For better or worse, English is the lingua franca of the natural sciences and technology, and researchers and universities that do not publish extensively in English will simply not be noticed by other academics. Nor does a bias towards English explain why Sydney, ANU and the National University of Singapore perform comparatively poorly on citations yet rank highly in the peer review.

Furthermore, there are some places for which no citation score is given at all. Presumably, they did not produce enough papers to be considered. But if they produce so few papers, how could they have become so widely known that their peers would place them among the world’s top 100? These universities are:

Indian Institutes of Technology (all of them)
Monash
Auckland
Universiti Kebangsaan Malaysia
Fudan
Warwick
Tokyo Institute of Technology
Hong Kong University of Science and Technology
Hong Kong
St. Petersburg
Adelaide
Korean Advanced Institute of Science and Technology
New York University
King’s College London
Nanyang Technological University
Vienna Technical University
Trinity College Dublin
Universiti Malaya
Waterloo

These universities are overwhelmingly East Asian, Australian and European. None of them appear to be small, specialized universities that might produce a small amount of high quality research.

The peer review and citations per paper thus give a totally different picture. The first suggests that Asian and European universities are challenging those of the United States and that Oxford and Cambridge are the best in the world. The second indicates that the quality of research of American universities is still unchallenged, that the record of Oxford and Cambridge is undistinguished and that East Asian and Australian universities have a long way to go before being considered world class in any meaningful sense of the word.

A further indication of how different the two lists are can be found by calculating their correlation. Overall, the correlation is, as expected, weak (.390). For Asia-Pacific (.217) and for Europe (.341) it is even weaker and statistically insignificant. If we exclude Australia from the list of Asia-Pacific universities and just consider the remaining 25, there is almost no association at all between the two measures. The correlation is .099, for practical purposes no better than chance. Whatever criteria the peer reviewers used to pick Asian universities, quality of research could not have been among them.
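
The figures above can be checked along the following lines. This is only a minimal sketch in Python: the post does not say whether the quoted figures are Pearson correlations of the two sets of scores or rank correlations of the two orderings, so both are computed, and the numbers used here are hypothetical placeholders rather than the actual THES/QS data, which is not reproduced in full in this post.

    # Correlate peer review scores with citations per paper for one group of
    # universities. The values below are illustrative placeholders only.
    from scipy.stats import pearsonr, spearmanr

    peer_review = [100, 97, 93, 88, 85, 80, 76, 71]   # peer review scores
    citations   = [ 55, 80, 40, 95, 40, 60, 40, 85]   # citations per paper

    r, p = pearsonr(peer_review, citations)            # linear correlation
    rho, p_rank = spearmanr(peer_review, citations)    # rank correlation

    print(f"Pearson r = {r:.3f} (p = {p:.3f})")
    print(f"Spearman rho = {rho:.3f} (p = {p_rank:.3f})")

A correlation near zero, like the .099 reported here for the Asian universities, means that knowing a university's peer review score tells you essentially nothing about its citation performance.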

So has the THES peer review found out something that is not apparent from other measures? Is it possible that academics around the world are aware of research programmes that have yet to produce large numbers of citations? This, frankly, is quite implausible since it would require that nascent research projects have an uncanny tendency to concentrate in Europe, East Asia and Australia.

There seems to be no explanation for the overrepresentation of Europe, East Asia and Australia in the science top 100 other than some combination of a sampling procedure that included a disproportionate number of respondents from these regions, a survey design that allowed or encouraged respondents to nominate universities in their own regions or even countries, and a disproportionate distribution of forms to certain countries within regions.

I am not sure whether this is the result of extreme methodological naivety, with THES and QS thinking that they are performing some sort of global affirmative action by rigging the vote in favour of East Asia and Europe, or whether it is a cynical attempt to curry favour with those regions that are involved in the management education business or are at the forefront of globalization.

Whatever is going on, the peer review gives a very false picture of current research performance in science. If people apply to universities, accept jobs or award grants in the belief that Beijing is better at scientific research than Yale, ANU than Chicago, Lomonosov than UCLA, or Tsinghua than Johns Hopkins, then they are going to make bad decisions.

If this is unfair, then there is no reason why THES or QS should not indicate the following:

The universities and institutions to which the peer review forms were sent.
The precise questions that were asked.
The number of nominations received by universities from outside their own regions and countries.
The response rate.
The criteria by which respondents were chosen.

Until THES and/or QS do this, we can only assume that the rankings are an example of how almost any result can be produced with the appropriate, or inappropriate, research design.
