Friday, April 07, 2017

Job Application

A few years ago an elderly school teacher told me about a pupil who, when asked to write an application for a dream job, chose Archbishop of Canterbury "because I believe in God and know lots of Bible stories." These days he'd probably be over-qualified, but never mind.

So I think it's time to start sending out applications to Ranking Task Forces and the like. I know those zeros at the end of a number are important, that I should click submit AFTER filling in the data field, and that Stellenbosch is in Africa.

Update: Corrected a spelling error in the title without a complaint from anyone.

Thursday, April 06, 2017

Trinity College Shoots Itself in the Other Foot

The story so far: Trinity College Dublin (TCD) has been flourishing over the last decade according to the Shanghai and Round University Rankings (RUR) world rankings, both of which have a stable methodology. The university leadership has, however, been complaining about its decline in the Times Higher Education (THE) and QS rankings, which it attributes to the philistine refusal of the government to give TCD the money that it wants.

It turns out that the decline in the THE rankings was due to a laughable error. TCD had submitted incorrect data to THE: 355 Euro for total income, 111 Euro for research income and 5 Euro for income from industry, instead of 355 million, 111 million and 5 million. Supposedly, this was the result of an "innocent mistake."

Today, the Round University Rankings released their 2017 league table. These rankings are derived from the Global Institutional Profiles Project (GIPP), run first by Thomson Reuters and now by Clarivate Analytics, and used until 2014 by THE. TCD has fallen from 102nd place to 647th, well below Maynooth and the Dublin Institute of Technology. The decline was catastrophic for the indicators based on institutional data and very slight for those derived from surveys and bibliometric information.

What happened? It was not the tight fists of the government. TCD apparently just submitted the data form to GIPP without providing data. 

No doubt another innocent mistake. It will be interesting to see what the group of experts in charge of rankings at TCD has to say about this.

By the way, University College Dublin continues to do reasonably well in these rankings, slipping only a little from 195th to 218th.


Doing Something About Citations and Affiliations

University rankings have proliferated over the last decade. The International Rankings Expert Group's (IREG) inventory of national rankings counted 60, and there are now some 40 international rankings, including global, regional, subject, business school and system rankings.

In addition, there have been a variety of spin-offs and extracts from the global rankings, especially those published by Times Higher Education, including Asian, Latin American, African, MENA, Young University and Most International Universities rankings. The value of these varies, but that of the Asian rankings must now be considered especially suspect.

THE have just released the latest edition of their Asian rankings using the world rankings indicators with a recalibration of the weightings. They have reduced the weighting given to the teaching and research reputation surveys and increased that for research income, research productivity and income from industry. Unsurprisingly, Japanese universities, with good reputations but affected by budget cuts, have performed less well than in the world rankings.

These rankings have, as usual, produced some results that are rather counter intuitive and illustrate the need for THE, other rankers and the academic publishing industry to introduce some reforms in the presentation and counting of publications and citations.

As usual, the oddities in the THE Asian rankings have a lot to do with the research impact indicator, supposedly measured by citations. This, it needs to be explained, does not simply count the number of citations but compares them with the world average across more than three hundred fields, five years of publications and six years of citations. Added to all that is a "regional modification", applied to half of the indicator, by which the score for each university is divided by the square root of the score for the country in which the university is located. This effectively gives a boost to everybody except those in the top-scoring country, one that can be quite significant for countries with a low citation impact.
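The arithmetic of that regional modification can be sketched in a few lines. This is my own reconstruction from THE's published description, and the scores below are invented for illustration:

```python
from math import sqrt

def regional_modification(university_score, country_score):
    """Half of a university's citation score is left as-is; the other
    half is divided by the square root of its country's score
    (scores here on a 0-1 scale), as THE publicly describes it."""
    adjusted = university_score / sqrt(country_score)
    return 0.5 * university_score + 0.5 * adjusted

# Invented numbers: a university with a field-normalised score of 0.5
# in a country averaging 0.25 gets a substantial boost...
print(regional_modification(0.5, 0.25))  # 0.75
# ...while the same university in the top-scoring country gets none.
print(regional_modification(0.5, 1.0))   # 0.5
```

Because the country score is almost always below 1, the division inflates nearly everyone's result, which is why the modification raises the indicator's effective weight.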

What this means is that a university with a minimal number of papers can rack up a large and disproportionate score if it can collect large numbers of citations for them. This appears to be what has contributed to the extraordinary success of the institution variously known as Vel Tech University, Veltech University, Veltech Dr. RR & Dr. SR University and Vel Tech Rangarajan Dr Sagunthala R & D Institute of Science and Technology.

The university has scored a few local achievements, most recently ranking 58th among engineering institutions in the latest Indian NIRF rankings, but internationally, as Ben Sowter has indicated on Quora, it is way down the ladder, or even unable to get onto the bottom rung.

So how did it get to be the third best university and best private university in India according to the THE Asian rankings? How could it have the highest research impact of any university in Chennai, Tamil Nadu, India and Asia, and perhaps the highest or second highest in the world?

Ben Sowter of QS Intelligence Unit has provided the answer. It is basically due to industrial scale self-citation.

"Their score of 100 for citations places them as the topmost university in Asia for citations, more than 6 points clear of their nearest rival. This is an indicator weighted at 30%. Conversely, and very differently from other institutions in the top 10 for citations, with a score of just 8.4 for research, they come 285/298 listed institutions. So an obvious question emerges, how can one of the weakest universities in the list for research, be the best institution in the list for citations?
The simple answer? It can’t. This is an invalid result, which should have been picked up when the compilers undertook their quality assurance checks.
It’s technically not a mistake though, it has occurred as a result of the Times Higher Education methodology not excluding self-citations, and the institution appears to have, for either this or other purposes, undertaken a clear campaign to radically promote self-citations from 2015 onwards.
In other words and in my opinion, the university has deliberately and artificially manipulated their citation records, to cheat this or some other evaluation system that draws on them.
The Times Higher Education methodology page explains: The data include the 23,000 academic journals indexed by Elsevier’s Scopus database and all indexed publications between 2011 and 2015. Citations to these publications made in the six years from 2011 to 2016 are also collected.
So let’s take a look at the Scopus records for Vel Tech for those periods. There are 973 records in Scopus on the primary Vel Tech record for the period 2011–2015 (which may explain why Vel Tech have not featured in their world ranking which has a threshold of 1,000). Productivity has risen sharply through that period from 68 records in 2011 to 433 records in 2015 - for which due credit should be afforded.
The issue begins to present itself when we look at the citation picture. "
He continues:
 "That’s right. Of the 13,864 citations recorded for the main Vel Tech affiliation in the measured period 12,548 (90.5%) are self-citations!!
A self-citation is not, as some readers might imagine, one researcher at an institution citing another at their own institution, but that researcher citing their own previous research, and the only way a group of researchers will behave that way collectively on this kind of scale so suddenly, is to have pursued a deliberate strategy to do so for some unclear and potentially nefarious purpose.
It’s not a big step further to identify some of the authors who are most clearly at the heart of this strategy by looking at the frequency of their occurrence amongst the most cited papers for Vel Tech. Whilst this involves a number of researchers, at the heart of it seems to be Dr. Sundarapandian Vaidyanathan, Dean of the R&D Center.
Let’s take as an example, a single paper he published in 2015 entitled “A 3-D novel conservative chaotic system and its generalized projective synchronization via adaptive control”. Scopus lists 144 references, 19 of which appear to be his own prior publications. The paper has been cited 114 times, 112 times by himself in other work."

In addition, the non-self-citations come from a very small number of people, including his co-authors. Basically, his audience is himself and a small circle of friends.

Another point is that Dr Vaidyanathan has published in a limited number of journals and conference proceedings, the most important of which are the International Journal of Pharmtech Research and the International Journal of Chemtech Research, both of which have Vaidyanathan as an associate editor. My understanding of Scopus procedures for inclusion and retention in the database is that the number of citations is very important. I was once associated with a journal that was highly praised by the Scopus reviewers for the quality of its contents but rejected because it had few citations. I wonder if Scopus's criteria include watching out for self-citations.

The Editor in Chief of the International Journal of Chemtech Research is listed as Bhavik J Bhatt who received his Ph D from the University of Iowa in 2013 and does not appear to have ever held a full time university post.

The Editor in Chief of the International Journal of Pharmtech Research is Moklesur R Sarker, associate professor at Lincoln University College Malaysia, which in 2015 was reported to be in trouble for admitting bogus students.

I will be scrupulously fair and quote Dr Vaidyanathan.

"I joined Veltech University in 2009 as a Professor and shortly, I joined the Research and Development Centre at Veltech University. My recent research areas are chaos and control theory. I like to stress that research is a continuous process, and research done in one topic becomes a useful input to next topic and the next work cannot be carried on without referring to previous work. My recent research is an in-depth study and discovery of new chaotic and hyperchaotic systems, and my core research is done on chaos, control and applications of these areas. As per my Scopus record, I have published a total of 348 research documents. As per Scopus records, my work in chaos is ranked as No. 2, and ranked next to eminent Professor G. Chen. Also, as per Scopus records, my work in hyperchaos is ranked as No. 1, and I have contributed to around 50 new hyperchaotic systems. In Scopus records, I am also included in the list of peers who have contributed in control areas such as ‘Adaptive Control’, ‘Backstepping Control’, ‘Sliding Mode Control’ and ‘Memristors’. Thus, the Scopus record of my prolific research work gives ample evidence of my subject expertise in chaos and control. In this scenario, it is not correct for others to state that self-citation has been done for past few years with an intention of misleading others. I like to stress very categorically that the self-citations are not an intention of me or my University.         
I started research in chaos theory and control during the years 2010-2013. My visit to Tunisia as a General Chair and Plenary Speaker in CEIT-2013 Control Conference was a turning point in my research career. I met many researchers in control systems engineering and I actively started my research collaborations with foreign faculty around the world. From 2013-2016, I have developed many new results in chaos theory such as new chaotic systems, new hyperchaotic systems, their applications in various fields, and I have also published several papers in control techniques such as adaptive control, backstepping control, sliding mode control etc. Recently, I am also actively involved in new areas such as fractional-order chaotic systems, memristors, memristive devices, etc."
...
"Regarding citations, I cite the recent developments like the discovery of new chaotic and hyperchaotic systems, recent applications of these systems in various fields like physics, chemistry, biology, population ecology, neurology, neural networks, mechanics, robotics, chaos masking, encryption, and also various control techniques such as active control, adaptive control, backstepping control, fuzzy logic control, sliding mode control, passive control, etc,, and these recent developments include my works also."


His claim that self-citation was not his intention is odd. Was he citing in his sleep, or was he possessed by an evil spirit when he wrote his papers or signed off on them? The claim about citing recent developments that include his own work misses the point. Certainly somebody like Chomsky would cite himself when reviewing developments in formal linguistics, but he would also be cited by other people. Aside from himself and his co-authors, Dr Vaidyanathan is cited by almost nobody.

The problems with the citations indicator in the THE Asian rankings do not end there. Here are a few cases of universities with very low scores for research and unbelievably high scores for research impact:

King Abdulaziz University is ranked second in Asia for research impact. This is an old story and it is achieved by the massive recruitment of adjunct faculty culled from the lists of highly cited researchers.

Toyota Technological Institute is supposedly best in Japan for research impact, which I suspect would be news to most Japanese academics, but 19th for research.

Atilim University in Ankara is supposedly the best in Turkey for research impact but also has a very low score for research.

The high citations score for Quaid i Azam University in Pakistan results from participation in the multi-author physics papers derived from the CERN projects. In addition, there is one hyper productive researcher in applied mathematics.

Tokyo Metropolitan University gets a high score for citations because of a few much cited papers in physics and molecular genetics.

Bilkent University is a contributor to frequently cited multi-author papers in genetics.

According to THE, Universiti Tunku Abdul Rahman (UTAR) is the second best university in Malaysia and the best for research impact, something that will come as a surprise to anyone with the slightest knowledge of Malaysian higher education. This is because of participation in the Global Burden of Disease Study, whose papers propelled Anglia Ruskin University to the apex of British research. Other universities with disproportionate scores for research impact include Soochow University (China), Northeast Normal University (China), Jordan University of Science and Technology, Panjab University (India), the COMSATS Institute of Information Technology (Pakistan) and Yokohama City University (Japan).

There are some things that the ranking and academic publishing industries need to do about the collection, presentation and distribution of publications and citations data.


1.  All rankers should exclude self-citations from citation counts. This is very easy to do, requiring no more than ticking a box, and has been done by QS since 2011. It would be even better if intra-university and intra-journal citations were excluded as well.

2.  There will almost certainly be a growing problem with the recruitment of adjunct staff who will be asked to do no more than list an institution as a secondary affiliation when publishing papers. It would be sensible for academic publishers simply to insist that there be only one affiliation per author. If they do not, it should be possible for rankers to count only the first-named author.

3.  The more fields there are, the greater the chances that rankings can be skewed by strategically or accidentally placed citations. The number of fields used for normalisation should be kept within reasonable bounds.

4. A visit to the Leiden Ranking website and a few minutes tinkering with their settings and parameters will show that citations can be used to measure several different things. Rankers should use more than one indicator to measure citations.

5. It defies common sense for any ranking to give a greater weight to citations than to publications. Rankers need to review the weighting given to their citation indicators. In particular, THE needs to think about its regional modification, which has the effect, noted above, of increasing the citations score for nearly everybody and so pushing the actual weighting of the indicator above 30 per cent.

6. Academic publishers and databases like Scopus and Web of Science need to audit journals on a regular basis.
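A minimal sketch of what the first recommendation would amount to in code. The record schema here is invented for illustration; a real implementation would work against Scopus or Web of Science data:

```python
def countable_citations(citations):
    """Filter citation records, dropping self-citations (shared author),
    intra-university citations (shared affiliation) and intra-journal
    citations (same journal). Each record is a dict with 'citing' and
    'cited' sub-dicts -- a hypothetical schema, not a real API."""
    kept = []
    for c in citations:
        citing, cited = c["citing"], c["cited"]
        if set(citing["authors"]) & set(cited["authors"]):
            continue  # self-citation: at least one shared author
        if set(citing["affiliations"]) & set(cited["affiliations"]):
            continue  # intra-university citation
        if citing["journal"] == cited["journal"]:
            continue  # intra-journal citation
        kept.append(c)
    return kept

sample = [
    # An author citing their own earlier paper: excluded.
    {"citing": {"authors": ["A"], "affiliations": ["X"], "journal": "J1"},
     "cited":  {"authors": ["A"], "affiliations": ["X"], "journal": "J2"}},
    # Unrelated authors, institutions and journals: counted.
    {"citing": {"authors": ["B"], "affiliations": ["Y"], "journal": "J1"},
     "cited":  {"authors": ["C"], "affiliations": ["Z"], "journal": "J3"}},
]
print(len(countable_citations(sample)))  # 1
```

The point is how cheap the filter is: the author and affiliation lists are already in the bibliometric record, so excluding these citations is a matter of a few set intersections, not new data collection.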



Tuesday, April 04, 2017

The Trinity Affair Gets Worse


Trinity College Dublin (TCD) has been doing extremely well over the last few years, especially in research. It has risen in the Shanghai ARWU rankings from the 201-300 to the 151-200 band and from 174th to 102nd in the RUR rankings.

You would have thought that would be enough for any aspiring university and that they would be flying banners all over the place. But TCD has been too busy lamenting its fall in the Times Higher Education  (THE) and QS world rankings, which it attributed to the reluctance of the government to give it as much money as it wanted. Inevitably, a high powered Rankings Steering Group headed by the Provost was formed to turn TCD around.

In September last year the Irish Times reported that the reason, or part of the reason, for the fall in the THE world rankings was that incorrect data had been supplied. The newspaper said that:

"The error is understood to have been spotted when the college – which ranked in 160th place last year – fell even further in this year’s rankings.
The data error – which sources insist was an innocent mistake – is likely to have adversely affected its ranking position both this year and last. "
I am wondering why "sources" were so keen to insist that it was an innocent mistake. Has someone been hinting that it might have been deliberate?

It now seems that the mistake was not just a misplaced decimal point. It was a decimal point moved six places to the left so that TCD reported a total income of 355 Euro, a research income of 111 Euro and 5 Euro income from industry instead of 355, 111, and 5 million respectively. I wonder what will happen to applications to the business school.

What is even more disturbing, although perhaps not entirely surprising, is that THE's game-changing auditors did not notice.


Sunday, March 19, 2017

The ten smartest university rankings in the world (or lists if you want to be pedantic)

Paul Greatrix at Wonk HE has just published a list of the ten dumbest rankings in the world. Some I would agree with, but the choice of others seems a little odd. He objects to U-Multirank because it is expensive, which is unfair when you consider the money that universities are spending on summits, consultancies, audits, ranking task forces and the like. I personally find the Webometrics methodology comprehensible, although I admit that I am still not sure exactly what a bad practice is.

Anyway, the dumbest rankings list should be supplemented with a list of the smartest rankings. Criteria for inclusion are innovative and imaginative methodology, inclusion of formerly marginalised institutions, groups or individuals, cutting edge insights, or significant social utility. They are not in order since they are all, like all rankings and all US liberal arts colleges, unique, some of them extremely so.



  • The Campus Squirrel Listings. "The quality of an institution of higher learning can often be determined by the size, health and behavior of the squirrel population on campus." Top of the charts with five acorns are Kansas State University, Rice University, Ursinus College, Lehigh University, Susquehanna University, and the US Naval Academy.
  • The Fortunate 500 University Rankings by the Higher School of Economics Moscow uses a brilliantly sophisticated methodology that is unbiased by exam results, teaching or research. Linkoping University in Sweden is number one.
  • Ben Sowter of QS has said that his favourite ranking is GreenMetrics because it is the only one in which his alma mater, the University of Nottingham, is top. Similarly, I am very fond of the Research Ranking of African Universities (sorry, dead link) in which my former employer, Umar ibn El-Kanemi College of Education, Science and Technology, Nigeria,  is ranked 988th.
  • The Times Higher Education World University Rankings and spin offs have  done wonderful work over the years in identifying unsuspected pockets of excellence. Last year they had Anglia Ruskin University in Cambridge equal to Oxford for research impact measured by citations and well ahead of that other place in Cambridge.
  • This tradition is continued in the 2017 Asian Universities Rankings, which has discovered that Veltech University is the third best university in India and the best in Asia for research impact.
  • Princeton review's Stone Cold Sober Universities (staying off alcohol and drugs) is very predictable. Brigham Young University in Utah is always first and the higher rankings are filled with service academies and Christian schools. As long as the Air Force Academy stays in the top ten the world can sleep safely.
  • Three years ago Huffington Post published a list of the coldest colleges in the USA. Number one was not the University of Alaska but Minnesota State University.
  • There does not seem to be a formal ranking of universities that produce comedians but if there was then Cambridge, whose graduates include John Cleese, Peter Cook and Richard Ayoade, would surely be at the top. Oxford would obviously be the best for producing dancers.





Tuesday, February 28, 2017

Will Asia start rising again?

Times Higher Education (THE) has long suffered from the curse of field-normalised citations which without fail produce interesting (in the Chinese curse sense) results every year.

Part of THE's citation problem is the kilo-paper issue: papers, mainly in particle physics, with hundreds or thousands of authors and hundreds or thousands of citations. The best known case is 'Combined Measurement of the Higgs Boson Mass in pp Collisions ...' in Physical Review Letters, which has 5,154 contributors.

If every contributor to such papers is given equal credit for such citations then his or her institution would be awarded thousands of citations. Combined with other attributes of this indicator this means that a succession of improbable places, such as Tokyo Metropolitan University and Middle East Technical University,  have soared to the research impact peaks in the THE world rankings.

THE have already tried a couple of variations in counting citations for this sort of paper. In 2015 they introduced a cap, simply not counting any paper with more than a thousand authors. Then in 2016 they decided to give a minimum credit of 5% of citations to such authors.

That meant that in the 2014 THE world rankings an institution with one contributor to a paper with 2,000 authors and 2,000 citations would be counted as being cited 2,000 times, in 2015 not at all and in 2016 100 times. The result was that many universities in Japan, Korea, France and Turkey suffered catastrophic falls in 2015 and then made a modest comeback in 2016.
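The three counting rules can be spelled out explicitly. This is my own illustration of the rules as described above, using the hypothetical paper with 2,000 authors and 2,000 citations:

```python
def credit_2014(n_authors, citations):
    # Full counting: every contributor's institution gets all citations.
    return citations

def credit_2015(n_authors, citations, cap=1000):
    # Papers with more than `cap` authors are simply not counted.
    return 0 if n_authors > cap else citations

def credit_2016(n_authors, citations, cap=1000, floor=0.05):
    # Contributors to capped papers get a minimum 5% share of citations.
    return citations if n_authors <= cap else floor * citations

# One contributor to a 2,000-author paper with 2,000 citations:
print(credit_2014(2000, 2000))  # 2000
print(credit_2015(2000, 2000))  # 0
print(credit_2016(2000, 2000))  # 100.0
```

The swing from 2,000 to zero to 100 in three successive years is exactly the whiplash the affected universities experienced.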

But there may be more to come. A paper by Louis de Mesnard in the European Journal of Operational Research proposes a new formula, (n+2)/3n, for the share of a paper's citations credited to each of its n authors: a paper with two authors gives each one two thirds of the credit, while one with 2,000 authors gives each roughly a third.
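De Mesnard's formula is easy to check. This is my reading of the formula as reported, with n the number of authors:

```python
def author_share(n):
    """Fraction of a paper's citations credited to each of its n
    authors under de Mesnard's (n+2)/3n formula."""
    return (n + 2) / (3 * n)

print(author_share(1))     # 1.0 -- a sole author keeps full credit
print(author_share(2))     # ~0.667, two thirds each
print(author_share(2000))  # ~0.334, roughly a third each
```

Note that the shares do not sum to one: as n grows, each author's share falls towards one third rather than towards zero, which is what would soften the kilo-paper penalty.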

Mesnard's paper has been given star billing in an article in THE which suggests that the magazine is thinking about using his formula in the next world rankings.

If so, we can expect headlines about the extraordinary recovery of Asian universities in contrast to the woes of the UK and the USA suffering from the ravages of Brexit and Trump-induced depression. 


Monday, February 27, 2017

Worth Reading 8

Henk F Moed, Sapienza University of Rome

A critical comparative analysis of five world university rankings



ABSTRACT
To provide users insight into the value and limits of world university rankings, a comparative analysis is conducted of 5 ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, the selection of indicators and their normalizations influence the ranking positions of given institutions.

" Discussion and conclusions

The overlap analysis clearly illustrates that there is no such set as ‘the’ top 100 universities in terms of excellence: it depends on the ranking system one uses which universities constitute the top 100. Only 35 institutions appear in the top 100 lists of all 5 systems, and the number of overlapping institutions per pair of systems ranges between 49 and 75. An implication is that national governments executing a science policy aimed to increase the number of academic institutions in the ‘top’ of the ranking of world universities, should not only indicate the range of the top segment (e.g., the top 100), but also specify which ranking(s) are used as a standard, and argue why these were selected from the wider pool of candidate world university rankings."



Scientometrics DOI 10.1007/s11192-016-2212-y 

Tuesday, February 21, 2017

Never mind the rankings, THE has a huge database



There has been a debate, or perhaps the beginnings of a debate, about international university rankings following the publication of Bahram Bekhradnia's report to the Higher Education Policy Institute, with comments in University World News by Ben Sowter, Phil Baty, Frank Ziegele and Frans van Vught, and Philip Altbach and Ellen Hazelkorn, and a guest post by Bekhradnia in this blog.

Bekhradnia argued that global university rankings were damaging and dangerous because they encourage an obsession with research, rely on unreliable or subjective data, and emphasise spurious precision. He suggests that governments, universities and academics should just ignore the rankings.

Times Higher Education (THE) has now published a piece by THE rankings editor Phil Baty that does not really deal with the criticism but basically says that it does not matter very much because the THE database is bigger and better than anyone else's. This he claims is "the true purpose and enduring legacy" of the THE world rankings.

Legacy? Does this mean that THE is getting ready to abandon rankings, or maybe just the world rankings, and go exclusively into the data refining business? 

Whatever Baty is hinting at, if that is what he is doing, it does seem a rather insipid defence of the rankings to say that all the criticism is missing the point because they are the precursor to a big and sophisticated database.

The article begins with a quotation from Lydia Snover, Director of Institutional Research, at MIT:

“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”

This sounds as though THE is doing something very impressive that nobody else has even thought of doing. But Snover's elaboration of this point in an email gives equal billing to QS and THE as definition developers and suggests the definitions and data that they provide will improve and expand in the future, implying that they are now less than perfect. She says:

"QS and THE both collect data annually from a large number of international universities. For example, understanding who is considered to be “faculty” in the EU, China, Australia, etc.  is quite helpful to us when we want to compare our universities internationally.  Since both QS and THE are relatively new in the rankings business compared to US NEWS, their definitions are still evolving.  As we go forward, I am sure the amount of data they collect and the definitions of that data will expand and improve."

Snover, by the way, is a member of the QS advisory board, as is THE's former rankings "masterclass" partner, Simon Pratt.

Baty offers a rather perfunctory defence of the THE rankings. He talks about rankings bringing great insights into the shifting fortunes of universities. If we are talking about year-to-year changes, then the shifting fortunes THE purports to chart are a very big bug in its methodology: unless there has been drastic restructuring, universities do not change much in a matter of months, and any ranking that claims to detect massive shifts over a year is simply advertising its deficiencies.

The assertion that the THE rankings are the most comprehensive and balanced is difficult to take seriously. If by comprehensive it is meant that the THE rankings have more indicators than QS or Webometrics, that is correct. But the number of indicators does not mean very much if they are bundled together and the scores hidden from the public, and if some of the indicators, the teaching survey and research survey for example, correlate so closely that they are effectively the same thing. In any case, the Russian Round University Rankings have 20 indicators compared with THE's 13 in the world rankings.

As for being balanced, we have already seen Bekhradnia's analysis showing that even the teaching and international outlook criteria in the THE rankings are really about research. In addition, THE gives almost a third of its weighting to citations. In practice that is often even more, because the effect of the regional modification, now applied to half the indicator, is to boost in varying degrees the scores of everybody except those in the best performing country.

After offering a scaled down celebration of the rankings, Baty then dismisses critics while announcing that THE "is quietly [seriously?] getting on with a hugely ambitious project to build an extraordinary and truly unique global resource." 


Perhaps some elite universities, like MIT, will find the database and its associated definitions helpful but whether there is anything extraordinary or unique about it remains to be seen.