Sunday, September 17, 2017

Last word, I hope, on Babol Noshirvani University of Technology

If you type 'Babol University of Technology' rather than 'Babol Noshirvani University of Technology' into the Scopus search box, then the university does have enough publications to meet THE's criteria for inclusion in the world rankings.
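For anyone who wants to repeat the check, here is a rough sketch that queries the Scopus Search API instead of the web search box. It assumes you have your own Elsevier API key (the one below is a placeholder), and the counts will drift as Scopus coverage is updated.

```python
# Rough sketch: compare Scopus document counts for the two name variants, 2012-2016.
# Requires an Elsevier API key; the affiliation strings are the two names discussed above.
import requests

API_KEY = "YOUR-ELSEVIER-API-KEY"  # placeholder, substitute your own key
URL = "https://api.elsevier.com/content/search/scopus"

def scopus_total(affiliation_name):
    """Return the total number of documents Scopus reports for an affiliation name."""
    query = f'AFFIL("{affiliation_name}") AND PUBYEAR > 2011 AND PUBYEAR < 2017'
    resp = requests.get(URL, params={"query": query, "count": 1},
                        headers={"X-ELS-APIKey": API_KEY})
    resp.raise_for_status()
    return int(resp.json()["search-results"]["opensearch:totalResults"])

for name in ("Babol Noshirvani University of Technology",
             "Babol University of Technology"):
    print(name, scopus_total(name))
```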

So it seems that it was those highly cited researchers in engineering that propelled the university into the research impact stratosphere. That, and a rather eccentric methodology.

Saturday, September 09, 2017

More on Babol Noshirvani University of Technology

To answer the question raised in the previous post, how Babol Noshirvani University of Technology in Iran did so well in the latest THE rankings: part of the answer is that it has two highly cited researchers in engineering, Davood Domiri Ganji and Mohsen Sheikholeslami. I see no reason to question the quality of their research.

But I still have a couple of questions. First, THE say that they exclude universities whose research output was less than 1,000 articles between 2012 and 2016. Checking with Scopus, however, indicates that the university had 468 articles over that period, or 591 documents of all kinds including conference papers, book chapters and reviews, which is well below the threshold for inclusion. Is it possible that THE have included the Babol University of Medical Sciences in the count of publications or citations?

Those documents have been cited a total of 2,601 times, which is respectable but not quite on a scale that would rival Oxford and Chicago. It is possible that one or more of those articles has, for some reason, received an unusually large number of citations compared with the world average and that this has distorted the indicator score. If so, then we have yet another example of a defective methodology producing absurd results.
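As a toy illustration of how this can happen, suppose an institution's papers are compared with a world average in a mean-based normalisation, which is roughly the logic of a field- and year-normalised indicator. The numbers below are invented for the example, not Babol's actual citation counts.

```python
# Toy illustration (invented numbers): a mean-based normalised citation score
# is easily dominated by one or two exceptionally cited papers.

def normalised_impact(citations_per_paper, world_average):
    """Mean of each paper's citations divided by the world average
    (a single constant here; the real calculation normalises by field and year)."""
    return sum(c / world_average for c in citations_per_paper) / len(citations_per_paper)

world_avg = 5.0                    # assumed world average citations per paper
ordinary = [4, 6, 3, 5, 7] * 90    # 450 ordinarily cited papers, mean = world average
outliers = [2000, 1500]            # two exceptionally cited papers

print(round(normalised_impact(ordinary, world_avg), 2))             # 1.0
print(round(normalised_impact(ordinary + outliers, world_avg), 2))  # about 2.5
```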




Friday, September 08, 2017

Why did Babol Noshirvani University of Technology do so well in the THE rankings?

The THE world rankings and their regional offshoots have always been a source of entertainment mixed with a little bit of bewilderment. Every year a succession of improbable places jumps into the upper reaches of the citations indicator, which is supposed to measure global research impact. Usually it is possible to tell what happened. Often it is participation in a massive international physics project (although not so much over the last couple of years), contribution to a global medical or genetics survey, or even assiduous self-citation.

However, after checking with Scopus and the Web of Science, I still cannot see exactly how Babol Noshirvani University of Technology got into 14th place for this metric, equal to Oxford and ahead of Yale and Johns Hopkins, in the latest world rankings, and 301-350 overall, well ahead of every other Iranian university.

Can anybody help with an explanation? 

Tuesday, September 05, 2017

Highlights from THE citations indicator


The latest THE world rankings were published yesterday. As always, the most interesting part is the field- and year-normalised citations indicator that supposedly measures research impact.

Over the last few years, an array of implausible places has zoomed into the top ranks of this metric, sometimes disappearing as rapidly as they arrived.

The first place for citations this year goes to MIT. I don't think anyone would find that very controversial.

Here are some of the institutions that feature in the top 100 of THE's most important indicator, which has a weighting of 30 per cent.

2nd     St. George's, University of London
3rd=    University of California Santa Cruz, ahead of Berkeley and UCLA
6th=    Brandeis University, equal to Harvard
11th=   Anglia Ruskin University, UK, equal to Chicago
14th=   Babol Noshirvani University of Technology, Iran, equal to Oxford
16th=   Oregon Health and Science University
31st    King Abdulaziz University, Saudi Arabia
34th=   Brighton and Sussex Medical School, UK, equal to Edinburgh
44th    Vita-Salute San Raffaele University, Italy, ahead of the University of Michigan
45th=   Ulsan National Institute of Science and Technology, best in South Korea
58th=   University of Kiel, best in Germany and equal to King's College London
67th=   University of Iceland
77th=   University of Luxembourg, equal to the University of Amsterdam

Thursday, August 24, 2017

Milestone passed

The previous post was the 1,000th.

Comment by Christian Scholz

This comment is by Christian Schulz of the University of Hamburg. He points out that the University of Hamburg's rise in the Shanghai rankings was not the result of highly cited researchers moving from other institutions but of the improvement of research within the university.

If this is something that applies to other German universities, then it could be that Germany has a policy of growing its own researchers rather than importing talent from around the world. It seems to have worked very well for football, so perhaps the obsession of British universities with importing international researchers is not such a good idea.


I just wanted to share with you that we did not acquire two researchers for the HCR list in order to get a higher rank in the Shanghai Ranking. Those two researchers are Prof. Büchel and Prof. Ravens-Sieberer. Prof. Büchel has been working at our university for over a decade now and Prof. Ravens-Sieberer has been at our university since 2008.

Please also acknowledge that our place in the Shanghai Ranking was very stable from 2010 to 2015. We were very unhappy when they decided to use only the one-year list of HCR, because in 2015 none of our researchers made it onto the 2015 list, which caused the decline from 2015 to 2016.

Guest Post by Pablo Achard

This post is by Pablo Achard of the University of Geneva. It refers to the Shanghai subject rankings. However, the problem of outliers in subject and regional rankings is one that affects all the well-known rankings and will probably become more important over the next few years.


How a single article is worth 60 places

We can’t repeat it enough: an indicator is bad when a small variation in the input is overly amplified in the output. This is the case when indicators are based on very few events.

I recently came across this issue (again) with Shanghai's subject ranking of universities. The universities of Geneva and Lausanne (Switzerland) share the same School of Pharmacy, and a huge share of the published articles in this discipline are signed under the names of both institutions. But in the "Pharmacy and pharmaceutical sciences" ranking, one is ranked between the 101st and 150th positions while the other is 40th. Where does this difference come from?

Comparing the scores obtained under each category gives a clue:

Indicator       Geneva    Lausanne    Weight in the final score
PUB             46        44.3        1
CNCI            63.2      65.6        1
IC              83.6      79.5        0.2
TOP             0         40.8        1
AWARD           0         0           1
Weighted sum    125.9     166.6


So the main difference between the two institutions is the score in "TOP". Actually, the difference in the weighted sums (40.7) is almost equal to the value of this score (40.8). If Geneva and Lausanne had the same TOP score, they would be ranked 40th and 41st.
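For anyone who wants to check, the weighted sums can be reproduced directly from the scores and weights in the table above:

```python
# Reproduce the weighted sums from the table above (scores and weights as published).
weights  = {"PUB": 1, "CNCI": 1, "IC": 0.2, "TOP": 1, "AWARD": 1}
geneva   = {"PUB": 46.0, "CNCI": 63.2, "IC": 83.6, "TOP": 0.0,  "AWARD": 0.0}
lausanne = {"PUB": 44.3, "CNCI": 65.6, "IC": 79.5, "TOP": 40.8, "AWARD": 0.0}

def weighted_sum(scores):
    return sum(weights[k] * scores[k] for k in weights)

print(round(weighted_sum(geneva), 2))                           # 125.92
print(round(weighted_sum(lausanne), 2))                         # 166.6
print(round(weighted_sum(lausanne) - weighted_sum(geneva), 2))  # 40.68, almost all from TOP
```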

Surprisingly, a look at other institutions for that TOP indicator shows only five different values: 0, 40.8, 57.7, 70.7 and 100. According to the methodology page of the ranking, "TOP is the number of papers published in Top Journals in an Academic Subject for an institution during the period of 2011-2015. Top Journals are identified through ShanghaiRanking's Academic Excellence Survey […] The list of the top journals can be found here […] Only papers of 'Article' type are considered."
Looking deeper, there is just one journal in this list for Pharmacy: NATURE REVIEWS DRUG DISCOVERY. As its name indicates, this recognized journal mainly publishes ‘reviews’. A search on Web of Knowledge shows that in the period 2011-2015, only 63 ‘articles’ were published in this journal. That means a small variation in the input is overly amplified.

I searched for several institutions and rapidly found this rule: Harvard published 4 articles during these five years and got a score of 100; MIT published 3 articles and got a score of 70.7; 10 institutions published 2 articles and got a 57.7; and finally about 50 institutions published 1 article and got a 40.8.
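Laid out as marginal gains, those observed values make the lumpiness obvious (the zero comes from institutions, like Geneva, with no qualifying article):

```python
# Observed TOP scores by number of 'Article'-type papers in the single top journal.
top_score = {0: 0.0, 1: 40.8, 2: 57.7, 3: 70.7, 4: 100.0}

for n in range(1, 5):
    print(f"{n-1} -> {n} articles: +{top_score[n] - top_score[n-1]:.1f} points")
# 0 -> 1: +40.8, 1 -> 2: +16.9, 2 -> 3: +13.0, 3 -> 4: +29.3
```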

I still don’t get why this score is so nonlinear. But Lausanne published a single article in NATURE REVIEWS DRUG DISCOVERY and Geneva none (they published ‘reviews’ and ‘letters’ but no ‘articles’), and that small difference led to a gap of at least 60 places between the two institutions.


This is of course just one example of what happens too often: rankers want to publish sub-rankings and end up with indicators where outliers can’t be absorbed into large distributions. One article, one prize or one co-author in a large and productive collaboration all of a sudden makes a very large difference in final scores and ranks.

Friday, August 18, 2017

Comment on the 2017 Shanghai Rankings

In the previous post I referred to the vulnerabilities that have developed in the most popular world rankings, THE, QS and Shanghai ARWU: indicators that carry a large weighting and can be influenced by universities that know how to work the system, or that are sometimes just plain lucky.

In the latest QS rankings four universities from Mexico, Chile, Brazil and Argentina have 90+ scores for the academic reputation indicator, which has a 40% weighting. All of these universities have low scores for citations per faculty, which would seem at odds with a stellar research reputation. In three cases QS does not even list the score in its main table.

I have spent so much time on the normalised citation indicator in the THE world and regional rankings that I can hardly bear to revisit the issue. I will just mention the long list of universities that have achieved improbable glory through a few researchers, or sometimes just one, on a multi-author international physics, medical or genetics project.

The Shanghai rankings were once known for their stability but have become more volatile recently. The villain here is the highly cited researchers (HiCi) indicator, which has a 20% weighting and counts the scientists included in the lists now published by Clarivate Analytics.

It seems that several universities have now become aware that if they can recruit a couple of extra highly cited researchers to the faculty they can get a significant boost in these rankings. Equally, if they should be so careless as to lose one or two, then the ranking consequences could be most unfortunate.

In 2016 a single highly cited researcher was worth 10.3 points in the Shanghai rankings, or 2.06 on the overall score after weighting, which is the difference between 500th place and 386th. That is a good deal, certainly much better than hiring a team of consultants or sending staff to excruciating transformational sharing sessions.
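The arithmetic behind that figure is simple enough; the 10.3 points come from the 2016 HiCi scores and the 20 per cent figure is Shanghai's published weight for the indicator:

```python
# Back-of-envelope value of one extra highly cited researcher in the 2016 ARWU.
points_per_hici = 10.3   # estimated worth of one researcher on the HiCi indicator
hici_weight = 0.20       # ARWU weighting for the HiCi indicator

print(round(points_per_hici * hici_weight, 2))  # 2.06 overall points,
                                                # roughly 500th place versus 386th
```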

Of course, as the number of HiCis increases, the value of each incremental researcher diminishes, so it would make little difference if a top 20 or 30 university added or lost a couple of researchers.

Take a look at some changes in the Shanghai rankings between 2016 and 2017. The University of Kyoto fell three places, from 32nd to 35th, or 0.5 points, from 37.2 to 36.7. This was due to a fall in the number of highly cited researchers from seven to five, which meant a fall of 2.7 in the HiCi score, or a weighted 0.54 points in the overall score.

McMaster University rose from 83rd to 66th, gaining 2.5 overall points. The HiCi score went from 32.4 to 42.3, equivalent to 1.98 weighted overall points, representing an increase in the number of such researchers from 10 to 15.
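Applying the same 20 per cent weighting to the two examples reproduces the reported contributions to the overall scores:

```python
# Check the Kyoto and McMaster figures: change in HiCi score times the 20% weight.
hici_weight = 0.20

kyoto_hici_drop    = 2.7          # HiCi score fall, seven -> five researchers
mcmaster_hici_gain = 42.3 - 32.4  # HiCi score rise, 10 -> 15 researchers

print(round(kyoto_hici_drop * hici_weight, 2))     # 0.54, accounting for Kyoto's 0.5-point overall fall
print(round(mcmaster_hici_gain * hici_weight, 2))  # 1.98, most of McMaster's 2.5-point overall gain
```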

Further down the charts, the University of Hamburg rose from 256th, with an overall score of 15.46, to 188th, with a score of 18.69, brought about largely by an improvement in the HiCi score from zero to 15.4, which was the result of the acquisition of two researchers.

Meanwhile, the Ecole Polytechnique of Paris fell from 303rd place to 434th, partly because of the loss of its only highly cited researcher.

It is time for ShanghaiRanking to start looking around for a Plan B for their citations indicator.