Sunday, September 17, 2017

Criticism of rankings from India

Some parts of the world seem to be increasingly sceptical of international rankings, or at least those produced by Times Higher Education (THE). MENA (Middle East and North Africa) and Africa did not seem to be very enthusiastic about THE's snapshot or pilot rankings. Many Latin American universities have chosen not to participate in the world and regional rankings.

India also seems to be suspicious of the rankings. An article by Vyasa Shastri in the e-paper Livemint details some of the ways in which universities might attempt to manipulate rankings to their advantage.

It is well worth reading, although I have one quibble. The article refers to King Abdulaziz University recruiting faculty who list the university as their secondary affiliation when publishing papers (there are now 41 of them). The original idea was to get top marks in the Shanghai Ranking's highly cited researchers indicator. The article correctly notes that the Shanghai rankings no longer count secondary affiliations, but these researchers can still help in the Nature and Science and publications indicators and in the citations and publications metrics of other rankings.

Also, other Saudi universities have not recruited large numbers of researchers with secondary affiliations: there are only four for the rest of Saudi Arabia. I notice, though, that there are now quite a few for Chinese and Australian universities, including five for the University of Melbourne.

Last word, I hope, on Babol Noshirvani University of Technology

If you type 'Babol University of Technology' rather than 'Babol Noshirvani University of Technology' into the Scopus search box, then the university does have enough publications to meet THE's criteria for inclusion in the world rankings.

So it seems that it was those highly cited researchers in engineering that propelled the university into the research impact stratosphere. That, and a rather eccentric methodology.

Saturday, September 09, 2017

More on Babol Noshirvani University of Technology

To answer the question in the previous post, how did Babol Noshirvani University of Technology in Iran do so well in the latest THE rankings, part of the answer is that it has two highly cited researchers in engineering, Davood Domiri Ganji and Mohsen Sheikholeslami. I see no reason to question the quality of their research.

But I still have a couple of questions. First, THE say that they exclude universities whose research output is less than 1,000 articles between 2012 and 2016. But checking with Scopus indicates that the university had 468 articles over that period, or 591 documents of all kinds including conference papers, book chapters and reviews, which is way below the threshold for inclusion. Is it possible that THE have included the Babol University of Medical Sciences in the count of publications or citations?

Those documents have been cited a total of 2,601 times, which is respectable but not quite on a scale that would rival Oxford and Chicago. It is possible that one or more of those articles have, for some reason, received an unusual number of citations compared to the world average, and that this has distorted the indicator score. If so, then we have yet another example of a defective methodology producing absurd results.
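To see how this could happen, here is a rough sketch of the distortion mechanism. THE's actual normalisation is more elaborate and its baselines are proprietary; this toy calculation simply divides each paper's citations by a hypothetical world-average figure and averages the ratios across an institution's output, which is enough to show why one outlier paper in a small output can dwarf a large, solid research portfolio.

```python
# Toy sketch, NOT THE's actual formula: each paper's citations are divided
# by a world-average baseline for its field and year, and the ratios are
# averaged across the institution's entire output.

def normalised_impact(citation_counts, world_average):
    """Mean ratio of each paper's citations to the world average."""
    return sum(c / world_average for c in citation_counts) / len(citation_counts)

WORLD_AVG = 10.0  # hypothetical world-average citations per paper

# A large university: 1,000 papers, each cited at the world average.
large = [10] * 1000
# A small university: 99 ordinary papers plus one 5,000-citation outlier.
small = [10] * 99 + [5000]

print(normalised_impact(large, WORLD_AVG))  # 1.0
print(normalised_impact(small, WORLD_AVG))  # 5.99
```

Because the average is taken over a small denominator, the single outlier lifts the small institution to nearly six times the world average, while the large institution's thousand solid papers leave it exactly at 1.0.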

Friday, September 08, 2017

Why did Babol Noshirvani University of Technology do so well in the THE rankings?

The THE world rankings and their regional offshoots have always been a source of entertainment mixed with a little bit of bewilderment. Every year a succession of improbable places jumps into the upper reaches of the citations indicator, which is supposed to measure global research impact. Usually it is possible to tell what happened. Often it is because of participation in a massive international physics project (although not so much over the last couple of years), contribution to a global medical or genetics survey, or even assiduous self-citation.

However, after checking with Scopus and the Web of Science, I still cannot see exactly how Babol Noshirvani University of Technology got into 14th place for this metric, equal to Oxford and ahead of Yale and Johns Hopkins, in the latest world rankings, and 301-350 overall, well ahead of every other Iranian university.

Can anybody help with an explanation? 

Wednesday, September 06, 2017

Highlights from THE citations indicator

The latest THE world rankings were published yesterday. As always, the most interesting part is the field- and year-normalised citations indicator that supposedly measures research impact.

Over the last few years, an array of implausible places have zoomed into the top ranks of this metric, sometimes disappearing as rapidly as they arrived.

The first place for citations this year goes to MIT. I don't think anyone would find that very controversial.

Here are some of the institutions that feature in the top 100 of THE's most important indicator which has a weighting of 30 per cent.

2nd     St. George's, University of London
3rd=    University of California Santa Cruz, ahead of Berkeley and UCLA
6th =   Brandeis University, equal to Harvard
11th=   Anglia Ruskin University, UK, equal to Chicago
14th=   Babol Noshirvani University of Technology, Iran, equal to Oxford
16th=   Oregon Health and Science University
31st     King Abdulaziz University, Saudi Arabia
34th=   Brighton and Sussex Medical School, UK, equal to Edinburgh
44th     Vita-Salute San Raffaele University, Italy, ahead of the University of Michigan
45th=   Ulsan National Institute of Science and Technology, best in South Korea
58th=   University of Kiel, best in Germany and equal to King's College London
67th=   University of Iceland
77th=   University of Luxembourg, equal to University of Amsterdam

Thursday, August 24, 2017

Milestone passed

The previous post was the 1,000th.

Comment by Christian Schulz

This comment is by Christian Schulz of the University of Hamburg. He points out that the University of Hamburg's rise in the Shanghai rankings was not the result of highly cited researchers moving from other institutions but of the improvement of research within the university.

If this is something that applies to other German universities, then it could be that Germany has a policy of growing its own researchers rather than importing talent from around the world. It seems to have worked very well for football, so perhaps the obsession of British universities with importing international researchers is not such a good idea.

I just wanted to share with you that we did not acquire two researchers to get on the HCR list and a higher rank in the Shanghai Ranking. Those two researchers are Prof. Büchel and Prof. Ravens-Sieberer. Prof. Büchel has been working at our university for over a decade now, and Prof. Ravens-Sieberer has been at our university since 2008.

Please also acknowledge that our place in the Shanghai Ranking was very stable from 2010 to 2015. We were very unhappy when they decided to use only the one-year list of HCR, because in 2015 none of our researchers made it onto the 2015 list, which caused the descent from 2015 to 2016.

Guest Post by Pablo Achard

This post is by Pablo Achard of the University of Geneva. It refers to the Shanghai subject rankings. However, the problem of outliers in subject and regional rankings is one that affects all the well-known rankings and will probably become more important over the next few years.

How a single article is worth 60 places

We can’t repeat it enough: an indicator is bad when a small variation in the input is overly amplified in the output. This is the case when indicators are based on very few events.

I recently came across this issue (again) with Shanghai's subject ranking of universities. The universities of Geneva and Lausanne (Switzerland) share the same School of Pharmacy, and a huge share of the articles published in this discipline carry the names of both institutions. But in the “Pharmacy and pharmaceutical sciences” ranking, one is ranked between the 101st and 150th position while the other is 40th. Where does this difference come from?

Comparing the scores obtained under each category (each indicator's score, its weight in the final score, and the resulting weighted sum) gives a clue.

So the main difference between the two institutions is the score in “TOP”. Actually, the difference in the weighted sum (40.7) is almost equal to the value of this score (40.8). If Geneva and Lausanne had the same TOP score, they would be 40th and 41st.

Surprisingly, a look at other institutions for that TOP indicator shows only five different values: 0, 40.8, 57.7, 70.7 and 100. According to the methodology page of the ranking, “TOP is the number of papers published in Top Journals in an Academic Subject for an institution during the period of 2011-2015. Top Journals are identified through ShanghaiRanking’s Academic Excellence Survey […] The list of the top journals can be found here […] Only papers of ‘Article’ type are considered.”

Looking deeper, there is just one journal in this list for Pharmacy: NATURE REVIEWS DRUG DISCOVERY. As its name indicates, this recognised journal mainly publishes ‘reviews’. A search on Web of Knowledge shows that in the period 2011-2015 only 63 ‘articles’ were published in this journal. That means a small variation in the input is overly amplified.

I searched for several institutions and rapidly found this rule: Harvard published 4 articles during these five years and got a score of 100; MIT published 3 articles and got a score of 70.7; 10 institutions published 2 articles and got a 57.7; and finally about 50 institutions published 1 article and got a 40.8.

I still don’t get why this score is so nonlinear. But Lausanne published one single article in NATURE REVIEWS DRUG DISCOVERY and Geneva none (it published ‘reviews’ and ‘letters’ but no ‘articles’), and that small difference led to a gap of at least 60 places between the two institutions.
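For what it is worth, the three lower values quoted above are consistent with a square-root scaling: 40.8 × √2 ≈ 57.7 and 40.8 × √3 ≈ 70.7 (equivalently, the values equal 100/√6, 100/√3 and 100/√2). This is only an inference from the published scores, not anything stated in the methodology, and it does not explain Harvard's jump to 100, since 40.8 × √4 would give 81.6; the top score may simply be pinned to 100. A quick check:

```python
import math

# TOP scores reported in the post, keyed by number of 'articles' published
# in NATURE REVIEWS DRUG DISCOVERY over 2011-2015.
observed = {1: 40.8, 2: 57.7, 3: 70.7, 4: 100.0}

# Hypothesis (mine, not ShanghaiRanking's): score ≈ 40.82 * sqrt(n).
for n in (1, 2, 3):
    predicted = 40.82 * math.sqrt(n)
    print(f"{n} article(s): predicted {predicted:.1f}, observed {observed[n]}")

# The curve breaks at the top: 40.82 * sqrt(4) = 81.6, not the observed 100,
# which suggests the leading institution's score is rescaled to 100 separately.
print(f"4 articles: curve gives {40.82 * math.sqrt(4):.1f}, observed 100.0")
```

Whatever the exact formula, a square-root curve is steepest near zero, which is precisely why going from zero articles to one article is worth a full 40.8 points and, in this case, at least 60 ranking places.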

This is of course just one example of something that happens too often: rankers want to publish sub-rankings and end up with indicators where outliers can't be absorbed into large distributions. One article, one prize or one co-author in a large and productive collaboration all of a sudden makes very large differences in final scores and ranks.