Thursday, April 06, 2017

Doing Something About Citations and Affiliations

University rankings have proliferated over the last decade. The International Rankings Expert Group's (IREG) inventory counted 60 national rankings, and there are now some 40 international rankings, including global, regional, subject, business school and system rankings.

In addition, there have been a variety of spin-offs and extracts from the global rankings, especially those published by Times Higher Education, including Asian, Latin American, African, MENA and Young University rankings and a ranking of the most international universities. The value of these varies, but that of the Asian rankings must now be considered especially suspect.

THE have just released the latest edition of their Asian rankings using the world rankings indicators with a recalibration of the weightings. They have reduced the weighting given to the teaching and research reputation surveys and increased that for research income, research productivity and income from industry. Unsurprisingly, Japanese universities, with good reputations but affected by budget cuts, have performed less well than in the world rankings.

These rankings have, as usual, produced some results that are rather counter intuitive and illustrate the need for THE, other rankers and the academic publishing industry to introduce some reforms in the presentation and counting of publications and citations.

As usual, the oddities in the THE Asian rankings have a lot to do with the research impact indicator, supposedly measured by citations. This, it needs to be explained, does not simply count the number of citations but compares them with the world average for over three hundred fields, across five years of publications and six years of citations. Added to all that is a "regional modification", applied to half of the indicator, by which the score for each university is divided by the square root of the score for the country in which the university is located. This effectively gives a boost to everybody except universities in the top-scoring country, one that can be quite significant for countries with a low citation impact.
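As a rough sketch of how this works (the function, the 50/50 blend and the treatment of the country score as a fraction of the world average are my own reading of THE's published description, not their actual code):

```python
from math import sqrt

def blended_citation_score(university_score, country_score):
    """Sketch of the 'regional modification' described above.

    Half of the indicator uses the raw field-normalised score; the other
    half divides it by the square root of the country's score, taken here
    as a fraction of the world average (an assumption for illustration).
    """
    adjusted = university_score / sqrt(country_score)
    return 0.5 * university_score + 0.5 * adjusted

# A university scoring 30 in a country at a quarter of the world average
# has half its score doubled (1 / sqrt(0.25) == 2):
print(blended_citation_score(30, 0.25))  # 45.0
# At the world average there is no boost at all:
print(blended_citation_score(30, 1.0))   # 30.0
```

The lower the national average, the bigger the multiplier, which is why a handful of highly cited papers in a low-impact country can produce such spectacular scores.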

What this means is that a university with a minimal number of papers can rack up a large and disproportionate score if it can collect large numbers of citations for a relatively small number of papers. This appears to be what has contributed to the extraordinary success of the institution variously known as Vel Tech University, Veltech University, Veltech Dr. RR & Dr. SR University and Vel Tech Rangarajan Dr Sagunthala R & D Institute of Science and Technology.

The university has scored a few local achievements, most recently ranking 58th for engineering institutions in the latest Indian NIRF rankings, but internationally, as Ben Sowter has indicated on Quora, it is way down the ladder or even unable to get onto the bottom rung.

So how did it get to be the third best university and best private university in India according to the THE Asian rankings? How could it have the highest research impact of any university in Chennai, Tamil Nadu, India and Asia, and perhaps the highest or second highest in the world?

Ben Sowter of QS Intelligence Unit has provided the answer. It is basically due to industrial scale self-citation.

"Their score of 100 for citations places them as the topmost university in Asia for citations, more than 6 points clear of their nearest rival. This is an indicator weighted at 30%. Conversely, and very differently from other institutions in the top 10 for citations, with a score of just 8.4 for research, they come 285/298 listed institutions. So an obvious question emerges, how can one of the weakest universities in the list for research, be the best institution in the list for citations?
The simple answer? It can’t. This is an invalid result, which should have been picked up when the compilers undertook their quality assurance checks.
It’s technically not a mistake though, it has occurred as a result of the Times Higher Education methodology not excluding self-citations, and the institution appears to have, for either this or other purposes, undertaken a clear campaign to radically promote self-citations from 2015 onwards.
In other words and in my opinion, the university has deliberately and artificially manipulated their citation records, to cheat this or some other evaluation system that draws on them.
The Times Higher Education methodology page explains: The data include the 23,000 academic journals indexed by Elsevier’s Scopus database and all indexed publications between 2011 and 2015. Citations to these publications made in the six years from 2011 to 2016 are also collected.
So let’s take a look at the Scopus records for Vel Tech for those periods. There are 973 records in Scopus on the primary Vel Tech record for the period 2011–2015 (which may explain why Vel Tech have not featured in their world ranking which has a threshold of 1,000). Productivity has risen sharply through that period from 68 records in 2011 to 433 records in 2015 - for which due credit should be afforded.
The issue begins to present itself when we look at the citation picture. "
He continues:
 "That’s right. Of the 13,864 citations recorded for the main Vel Tech affiliation in the measured period 12,548 (90.5%) are self-citations!!
A self-citation is not, as some readers might imagine, one researcher at an institution citing another at their own institution, but that researcher citing their own previous research, and the only way a group of researchers will behave that way collectively on this kind of scale so suddenly is to have pursued a deliberate strategy to do so for some unclear and potentially nefarious purpose.
It’s not a big step further to identify some of the authors who are most clearly at the heart of this strategy by looking at the frequency of their occurrence amongst the most cited papers for Vel Tech. Whilst this involves a number of researchers, at the heart of it seems to be Dr. Sundarapandian Vaidyanathan, Dean of the R&D Center.
Let’s take as an example, a single paper he published in 2015 entitled “A 3-D novel conservative chaotic system and its generalized projective synchronization via adaptive control”. Scopus lists 144 references, 19 of which appear to be his own prior publications. The paper has been cited 114 times, 112 times by himself in other work."

In addition, the non-self citations are from a very small number of people, including his co-authors. Basically his audience is himself and a small circle of friends.

Another point is that Dr Vaidyanathan has published in a limited number of journals and conference proceedings, the most important of which are the International Journal of Pharmtech Research and the International Journal of Chemtech Research, both of which have Vaidyanathan as an associate editor. My understanding of Scopus procedures for inclusion and retention in the database is that the number of citations is very important. I was once associated with a journal that was highly praised by the Scopus reviewers for the quality of its contents but rejected because it had few citations. I wonder if Scopus's criteria include watching out for self-citations.

The Editor in Chief of the International Journal of Chemtech Research is listed as Bhavik J Bhatt who received his Ph D from the University of Iowa in 2013 and does not appear to have ever held a full time university post.

The Editor in Chief of the International Journal of Pharmtech Research is Moklesur R Sarker, associate professor at Lincoln University College Malaysia, which in 2015 was reported to be in trouble for admitting bogus students.

I will be scrupulously fair and quote Dr Vaidyanathan.

"I joined Veltech University in 2009 as a Professor and shortly, I joined the Research and Development Centre at Veltech University. My recent research areas are chaos and control theory. I like to stress that research is a continuous process, and research done in one topic becomes a useful input to next topic and the next work cannot be carried on without referring to previous work. My recent research is an in-depth study and discovery of new chaotic and hyperchaotic systems, and my core research is done on chaos, control and applications of these areas. As per my Scopus record, I have published a total of 348 research documents. As per Scopus records, my work in chaos is ranked as No. 2, and ranked next to eminent Professor G. Chen. Also, as per Scopus records, my work in hyperchaos is ranked as No. 1, and I have contributed to around 50 new hyperchaotic systems. In Scopus records, I am also included in the list of peers who have contributed in control areas such as ‘Adaptive Control’, ‘Backstepping Control’, ‘Sliding Mode Control’ and ‘Memristors’. Thus, the Scopus record of my prolific research work gives ample evidence of my subject expertise in chaos and control. In this scenario, it is not correct for others to state that self-citation has been done for past few years with an intention of misleading others. I like to stress very categorically that the self-citations are not an intention of me or my University.         
I started research in chaos theory and control during the years 2010-2013. My visit to Tunisia as a General Chair and Plenary Speaker in CEIT-2013 Control Conference was a turning point in my research career. I met many researchers in control systems engineering and I actively started my research collaborations with foreign faculty around the world. From 2013-2016, I have developed many new results in chaos theory such as new chaotic systems, new hyperchaotic systems, their applications in various fields, and I have also published several papers in control techniques such as adaptive control, backstepping control, sliding mode control etc. Recently, I am also actively involved in new areas such as fractional-order chaotic systems, memristors, memristive devices, etc."
...
"Regarding citations, I cite the recent developments like the discovery of new chaotic and hyperchaotic systems, recent applications of these systems in various fields like physics, chemistry, biology, population ecology, neurology, neural networks, mechanics, robotics, chaos masking, encryption, and also various control techniques such as active control, adaptive control, backstepping control, fuzzy logic control, sliding mode control, passive control, etc,, and these recent developments include my works also."


His claim that self-citation was not his intention is odd. Was he citing in his sleep, or was he possessed by an evil spirit when he wrote his papers or signed off on them? The claim about citing recent developments that include his own work misses the point. Certainly somebody like Chomsky would cite himself when reviewing developments in formal linguistics, but he would also be cited by other people. Aside from himself and his co-authors, Dr Vaidyanathan is cited by almost nobody.

The problems with the citations indicator in the THE Asian rankings do not end there. Here are a few cases of universities with very low scores for research and unbelievably high scores for research impact:

King Abdulaziz University is ranked second in Asia for research impact. This is an old story and it is achieved by the massive recruitment of adjunct faculty culled from the lists of highly cited researchers.

Toyota Technological Institute is supposedly best in Japan for research impact, which I suspect would be news to most Japanese academics, but 19th for research.

Atilim University in Ankara is supposedly the best in Turkey for research impact but also has a very low score for research.

The high citations score for Quaid i Azam University in Pakistan results from participation in the multi-author physics papers derived from the CERN projects. In addition, there is one hyper productive researcher in applied mathematics.

Tokyo Metropolitan University gets a high score for citations because of a few much-cited papers in physics and molecular genetics.

Bilkent University is a contributor to frequently cited multi-author papers in genetics.

According to THE, Universiti Tunku Abdul Rahman (UTAR) is the second best university in Malaysia and the best for research impact, something that will come as a surprise to anyone with the slightest knowledge of Malaysian higher education. This is because of participation in the Global Burden of Disease Study, whose papers propelled Anglia Ruskin University to the apex of British research. Other universities with disproportionate scores for research impact include Soochow University (China), Northeast Normal University (China), Jordan University of Science and Technology (Jordan), Panjab University (India), the COMSATS Institute of Information Technology (Pakistan) and Yokohama City University (Japan).

There are some things that the ranking and academic publishing industries need to do about the collection, presentation and distribution of publications and citations data.


1.  All rankers should exclude self-citations from citation counts. This is very easy to do -- just a matter of ticking a box -- and has been done by QS since 2011. It would be even better if intra-university and intra-journal citations were excluded as well.

2.  There will almost certainly be a growing problem with the recruitment of adjunct staff who are asked to do no more than list an institution as a secondary affiliation when publishing papers. It would be sensible for academic publishers simply to insist that there be only one affiliation per author. If they do not, it should be possible for rankers to count only the first-named author.

3.  The more fields there are, the greater the chance that rankings can be skewed by strategically or accidentally placed citations. The number of fields used for normalisation should be kept within reasonable limits.

4. A visit to the Leiden Ranking website and a few minutes tinkering with their settings and parameters will show that citations can be used to measure several different things. Rankers should use more than one indicator to measure citations.

5. It defies common sense for any ranking to give a greater weight to citations than to publications. Rankers need to review the weighting given to their citation indicators. In particular, THE needs to think about its regional modification, which has the effect, noted above, of increasing the citations score for nearly everybody and so pushing the actual weighting of the indicator above 30 per cent.

6. Academic publishers and databases like Scopus and Web of Science need to audit journals on a regular basis.
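Excluding self-citations, as suggested in point 1, is computationally trivial. A minimal sketch, assuming each citing paper is available as a set of author names (the data layout and names here are hypothetical):

```python
def non_self_citations(cited_authors, citing_papers):
    """Count citations, excluding any citing paper that shares an author
    with the cited paper (the broad definition of a self-citation)."""
    cited = set(cited_authors)
    return sum(1 for citing in citing_papers if not (cited & set(citing)))

# A Vel Tech-style pattern: nine of ten citing papers share an author
# with the cited paper, so only one citation survives the filter.
citing = [{"Vaidyanathan"}] * 9 + [{"Smith"}]
print(non_self_citations({"Vaidyanathan"}, citing))  # 1
```

Extending the same filter to intra-university citations would only require comparing affiliation sets instead of author names.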



Tuesday, April 04, 2017

The Trinity Affair Gets Worse


Trinity College Dublin (TCD) has been doing extremely well over the last few years, especially in research. It has risen in the Shanghai ARWU rankings from the 201-300 to the 151-200 band and from 174th to 102nd  in the RUR rankings.

You would have thought that would be enough for any aspiring university and that they would be flying banners all over the place. But TCD has been too busy lamenting its fall in the Times Higher Education  (THE) and QS world rankings, which it attributed to the reluctance of the government to give it as much money as it wanted. Inevitably, a high powered Rankings Steering Group headed by the Provost was formed to turn TCD around.

In September last year the Irish Times reported that the reason or part of the reason for the fall  in the THE world rankings was that incorrect data had been supplied.  The newspaper said that:

"The error is understood to have been spotted when the college – which ranked in 160th place last year – fell even further in this year’s rankings.
The data error – which sources insist was an innocent mistake – is likely to have adversely affected its ranking position both this year and last. "
I am wondering why "sources" were so keen to insist that it was an innocent mistake. Has someone been hinting that it might have been deliberate?

It now seems that the mistake was not just a misplaced decimal point. It was a decimal point moved six places to the left, so that TCD reported a total income of 355 euros, a research income of 111 euros and an income from industry of 5 euros, instead of 355 million, 111 million and 5 million respectively. I wonder what will happen to applications to the business school.

What is even more disturbing, although perhaps not entirely surprising, is that THE's game-changing auditors did not notice.


Sunday, March 19, 2017

The ten smartest university rankings in the world (or lists if you want to be pedantic)

Paul Greatrix at Wonkhe has just published a list of the ten dumbest rankings in the world. Some I would agree with, but the choice of others seems a little odd. He objects to U-Multirank because it is expensive, which is unfair when you consider the money that universities are spending on summits, consultancies, audits, ranking task forces and the like. I personally find the Webometrics methodology comprehensible, although I admit that I am still not sure exactly what a bad practice is.

Anyway, the dumbest rankings list should be supplemented with a list of the smartest rankings. Criteria for inclusion are innovative and imaginative methodology, inclusion of formerly marginalised institutions, groups or individuals, cutting edge insights, or significant social utility. They are not in order since they are all, like all rankings and all US liberal arts colleges, unique, some of them extremely so.



  • The Campus Squirrel Listings. "The quality of an institution of higher learning can often be determined by the size, health and behavior of the squirrel population on campus." Top of the charts with five acorns are Kansas State University, Rice University, Ursinus College, Lehigh University, Susquehanna University, and the US Naval Academy.
  • The Fortunate 500 University Rankings by the Higher School of Economics Moscow uses a brilliantly sophisticated methodology that is unbiased by exam results, teaching or research. Linköping University in Sweden is number one.
  • Ben Sowter of QS has said that his favourite ranking is GreenMetrics because it is the only one in which his alma mater, the University of Nottingham, is top. Similarly, I am very fond of the Research Ranking of African Universities (sorry, dead link) in which my former employer, Umar ibn El-Kanemi College of Education, Science and Technology, Nigeria,  is ranked 988th.
  • The Times Higher Education World University Rankings and their spin-offs have done wonderful work over the years in identifying unsuspected pockets of excellence. Last year they had Anglia Ruskin University in Cambridge equal to Oxford for research impact measured by citations and well ahead of that other place in Cambridge.
  • This tradition is continued in the 2017 Asian Universities Rankings, which have discovered that Veltech University is the third best university in India and the best in Asia for research impact.
  • The Princeton Review's Stone Cold Sober Universities (staying off alcohol and drugs) is very predictable. Brigham Young University in Utah is always first and the higher rankings are filled with service academies and Christian schools. As long as the Air Force Academy stays in the top ten the world can sleep safely.
  • Three years ago the Huffington Post published a list of the coldest colleges in the USA. Number one was not the University of Alaska but Minnesota State University.
  • There does not seem to be a formal ranking of universities that produce comedians but if there was then Cambridge, whose graduates include John Cleese, Peter Cook and Richard Ayoade, would surely be at the top. Oxford would obviously be the best for producing dancers.





Tuesday, February 28, 2017

Will Asia start rising again?

Times Higher Education (THE) has long suffered from the curse of field-normalised citations which without fail produce interesting (in the Chinese curse sense) results every year.

Part of THE's citation problem is the kilo-paper issue: papers, mainly in particle physics, with hundreds or thousands of authors and hundreds or thousands of citations. The best known case is 'Combined Measurement of the Higgs Boson Mass in pp Collisions ...' in Physical Review Letters, which has 5,154 contributors.

If every contributor to such papers is given equal credit for such citations then his or her institution would be awarded thousands of citations. Combined with other attributes of this indicator this means that a succession of improbable places, such as Tokyo Metropolitan University and Middle East Technical University,  have soared to the research impact peaks in the THE world rankings.

THE have already tried a couple of variations on counting citations for this sort of paper. In 2015 they introduced a cap, simply not counting any paper with more than a thousand authors. Then in 2016 they decided to give a minimum credit of 5% of citations to such authors.

That meant that in the 2014 THE world rankings an institution with one contributor to a paper with 2,000 authors and 2,000 citations would be counted as being cited 2,000 times, in 2015 not at all and in 2016 100 times. The result was that many universities in Japan, Korea, France and Turkey suffered catastrophic falls in 2015 and then made a modest comeback in 2016.

But there may be more to come. A paper by Louis de Mesnard in the European Journal of Operational Research proposes a new formula -- (n+2)/3n -- so that if a paper has two authors each one gets two thirds of the credit. If it has 2,000 authors the share approaches one third, so each contributor to the 2,000-citation paper above would be assigned about 667 citations.
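Applying the 2,000-author, 2,000-citation example to each counting scheme mentioned here makes the contrast plain. A sketch (the scheme labels and the fractional-counting baseline for the 2016 rule are my own reading of the methods, not official formulas):

```python
def credited_citations(n_authors, citations, scheme):
    """Citations credited to one contributor's institution under the
    various counting rules discussed above (a sketch, not THE's code)."""
    if scheme == "2014":     # full credit to every author
        return citations
    if scheme == "2015":     # papers with more than 1,000 authors dropped
        return 0 if n_authors > 1000 else citations
    if scheme == "2016":     # fractional counting with a 5% floor
        return max(citations / n_authors, 0.05 * citations)
    if scheme == "mesnard":  # each author gets a (n+2)/3n share
        return citations * (n_authors + 2) / (3 * n_authors)
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("2014", "2015", "2016", "mesnard"):
    print(scheme, round(credited_citations(2000, 2000, scheme)))
```

Because the (n+2)/3n share tends to one third as n grows, each of the 2,000 authors would be credited with about 667 citations, far more than under the current 5% rule, which explains the predicted bounce for universities contributing to kilo-papers.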

De Mesnard's paper has been given star billing in an article in THE, which suggests that the magazine is thinking about using his formula in the next world rankings.

If so, we can expect headlines about the extraordinary recovery of Asian universities in contrast to the woes of the UK and the USA suffering from the ravages of Brexit and Trump-induced depression. 


Monday, February 27, 2017

Worth Reading 8

Henk F Moed, Sapienza University of Rome

A critical comparative analysis of five world university rankings



ABSTRACT
To provide users insight into the value and limits of world university rankings, a comparative analysis is conducted of 5 ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, the selection of indicators and their normalizations influence the ranking positions of given institutions.

" Discussion and conclusions

The overlap analysis clearly illustrates that there is no such set as ‘the’ top 100 universities in terms of excellence: it depends on the ranking system one uses which universities constitute the top 100. Only 35 institutions appear in the top 100 lists of all 5 systems, and the number of overlapping institutions per pair of systems ranges between 49 and 75. An implication is that national governments executing a science policy aimed to increase the number of academic institutions in the ‘top’ of the ranking of world universities, should not only indicate the range of the top segment (e.g., the top 100), but also specify which ranking(s) are used as a standard, and argue why these were selected from the wider pool of candidate world university rankings."



Scientometrics DOI 10.1007/s11192-016-2212-y 

Tuesday, February 21, 2017

Never mind the rankings, THE has a huge database



There has been a debate, or perhaps the beginnings of a debate, about international university rankings following the publication of Bahram Bekhradnia's report to the Higher Education Policy Institute, with comments in University World News by Ben Sowter, Phil Baty, Frank Ziegele and Frans van Vught, and Philip Altbach and Ellen Hazelkorn, and a guest post by Bekhradnia in this blog.

Bekhradnia argued that global university rankings were damaging and dangerous because they encourage an obsession with research, rely on unreliable or subjective data, and emphasise spurious precision. He suggests that governments, universities and academics should just ignore the rankings.

Times Higher Education (THE) has now published a piece by THE rankings editor Phil Baty that does not really deal with the criticism but basically says that it does not matter very much because the THE database is bigger and better than anyone else's. This he claims is "the true purpose and enduring legacy" of the THE world rankings.

Legacy? Does this mean that THE is getting ready to abandon rankings, or maybe just the world rankings, and go exclusively into the data refining business? 

Whatever Baty is hinting at, if that is what he is doing, it does seem a rather insipid defence of the rankings to say that all the criticism is missing the point because they are the precursor to a big and sophisticated database.

The article begins with a quotation from Lydia Snover, Director of Institutional Research, at MIT:

“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”

This sounds as though THE is doing something very impressive that nobody else has even thought of doing. But Snover's elaboration of this point in an email gives equal billing to QS and THE as definition developers and suggests the definitions and data that they provide will improve and expand in the future, implying that they are now less than perfect. She says:

"QS and THE both collect data annually from a large number of international universities. For example, understanding who is considered to be “faculty” in the EU, China, Australia, etc.  is quite helpful to us when we want to compare our universities internationally.  Since both QS and THE are relatively new in the rankings business compared to US NEWS, their definitions are still evolving.  As we go forward, I am sure the amount of data they collect and the definitions of that data will expand and improve."

Snover, by the way, is a member of the QS advisory board, as is THE's former rankings "masterclass" partner, Simon Pratt.

Baty offers a rather perfunctory defence of the THE rankings. He talks about rankings bringing great insights into the shifting fortunes of universities. If we are talking about year to year changes then the fact that THE purports to chart shifting fortunes is a very big bug in their methodology. Unless there has been drastic restructuring universities do not change much in a matter of months and any ranking that claims that it is detecting massive shifts over a year is simply advertising its deficiencies.

The assertion that the THE rankings are the most comprehensive and balanced is difficult to take seriously. If by comprehensive it is meant that the THE rankings have more indicators than QS or Webometrics, that is correct. But the number of indicators does not mean very much if they are bundled together and the scores hidden from the public, or if some of the indicators, the teaching survey and research survey for example, correlate so closely that they are effectively the same thing. In any case, the Russian Round University Rankings have 20 indicators compared with THE's 13 in the world rankings.

As for being balanced, we have already seen Bekhradnia's analysis showing that even the teaching and international outlook criteria in the THE rankings are really about research. In addition, THE gives almost a third of its weighting to citations. In practice that is often even more because the effect of the regional modification, now applied to half the indicator, is to boost in varying degrees the scores of everybody except those in the best performing country. 

After offering a scaled down celebration of the rankings, Baty then dismisses critics while announcing that THE "is quietly [seriously?] getting on with a hugely ambitious project to build an extraordinary and truly unique global resource." 


Perhaps some elite universities, like MIT, will find the database and its associated definitions helpful but whether there is anything extraordinary or unique about it remains to be seen.







Saturday, February 18, 2017

Searching for the Gold Standard: The Times Higher Education World University Rankings, 2010-2014


Now available at the Asian Journal of University Education. The paper has, of course, already been overtaken by subsequent developments in the world of university rankings.


ABSTRACT

This paper analyses the global university rankings introduced by Times Higher Education (THE) in partnership with Thomson Reuters in 2010 after the magazine ended its association with its former data provider Quacquarelli Symonds. The distinctive features of the new rankings included a new procedure for determining the choice and weighting of the various indicators, new criteria for inclusion in and exclusion from the rankings, a revised academic reputation survey, the introduction of an indicator that attempted to measure innovation, the addition of a third measure of internationalization, the use of several indicators related to teaching, the bundling of indicators into groups, and most significantly, the employment of a very distinctive measure of research impact with an unprecedentedly large weighting. The rankings met with little enthusiasm in 2010 but by 2014 were regarded with some favour by administrators and policy makers despite the reservations and criticisms of informed observers and the unusual scores produced by the citations indicator. In 2014, THE announced that the partnership would come to an end and that the magazine would collect its own data. There were some changes in 2015 but the basic structure established in 2010 and 2011 remained intact.


Saturday, February 11, 2017

What was the greatest ranking insight of 2016?

It is now difficult to imagine a world without university rankings. If they did not exist we would have to make judgements and decisions based on the self-serving announcements of bureaucrats and politicians, reputations derived from the achievements of past decades and popular and elite prejudices.

Rankings sometimes tell us things that are worth hearing. The first edition of the Shanghai rankings revealed emphatically that venerable European universities such as Bologna, the Sorbonne and Heidelberg were lagging behind their Anglo-Saxon competitors. More recently, the rise of research-based universities in South Korea and Hong Kong and the relative stagnation of Japan have been documented by global rankings. The Shanghai ARWU also shows the steady decline in the relative research capacity of a variety of US institutions, including Wake Forest University, Dartmouth College, Wayne State University, the University of Oregon and Washington State University.

International university rankings have developed a lot in recent years and, with their large databases and sophisticated methodology, they can now provide us with an expanding wealth of "great insights into the strengths and shifting fortunes" of major universities.

So what was the greatest ranking insight of 2016?  Here are the first three on my shortlist. I hope to add a few more over the next couple of weeks. If anybody has suggestions I would be happy to publish them.

One. Cambridge University isn't even the best research university in Cambridge.
You may have thought that Cambridge University was one of the best research universities in the UK or Europe, perhaps even the best. But when it comes to research impact, as measured by field and year normalised citations with a 50% regional modification, it isn't even the best in Cambridge. That honour, according to THE, goes to Anglia Ruskin University, a former art school. Even more remarkable is that this achievement was due to the work of a single researcher. I shall keep the name a secret in case his or her office becomes a stopping point for bus tours.
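The regional modification mentioned here works roughly as follows, based on the description of the methodology above: half of a university's citation score is left alone, while the other half is divided by the square root of the country's average score. A minimal sketch, with invented numbers on a 0-1 scale:

```python
import math

def adjusted_citation_score(univ, country):
    """Sketch of THE's 'regional modification' as described above.

    univ:    university's normalised citation score (0-1 scale)
    country: the country's average score on the same scale
    Half the indicator is unchanged; the other half is divided by
    the square root of the country average, which boosts every
    university outside the top-scoring country.
    """
    return 0.5 * univ + 0.5 * (univ / math.sqrt(country))

# The same raw score of 0.5 in a low-impact country (average 0.25)
# versus the top-scoring country (average 1.0):
print(adjusted_citation_score(0.5, 0.25))  # 0.75 -- a large boost
print(adjusted_citation_score(0.5, 1.0))   # 0.5  -- no boost
```

This is why the boost "can be quite significant for countries with a low citation impact": the smaller the country average, the larger the divisor's effect.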

Two. The University of Buenos Aires and the Pontifical Catholic University of Chile rival the top European, American and Australian universities for graduate employability. 
The top universities for graduate employability according to the Quacquarelli Symonds (QS) employer survey are pretty obvious: Harvard, Oxford, Cambridge, MIT, Stanford. But it seems that there are quite a few Latin American universities in the world top 100 for employability. The University of Buenos Aires is 25th and the Pontifical Catholic University of Chile 28th in last year's QS world rankings employer survey indicator. Melbourne is 23rd, ETH 26th, Princeton 32nd and New York University 36th.

Three. King Abdulaziz University is one of the world's  leading universities for engineering.
The conventional wisdom seems settled: pick three or four from MIT, Harvard, Stanford, Berkeley, perhaps even a star rising in the East like Tsinghua or the National University of Singapore. But in the Shanghai field rankings for Engineering last year the fifth place went to King Abdulaziz University in Jeddah. For highly cited researchers in engineering it is second in the world, surpassed only by Stanford.


Monday, February 06, 2017

Is Trump keeping out the best and the brightest?


One of several strange things about the legal challenge to Trump's executive order on refugees and immigration is the claim in an amicus brief by dozens of companies, many of them at the cutting edge of the high tech economy, that the order makes it hard to "recruit, hire and retain some of the world's best employees." The proposed, now frozen, restrictions would, moreover, be a "barrier to innovation" and prevent companies from attracting "great talent." They point out that many Nobel prize winners are immigrants.

Note that these are "tech giants", not meat packers or farmers and that they are talking about the great and the best employees, not the good or adequate or possibly employable after a decade of ESL classes and community college.

So let us take a look at the seven countries included in the proposed restrictions. Are they likely to be the source of large numbers of future hi tech entrepreneurs, Nobel laureates and innovators?

The answer is almost certainly no. None of the Nobel prize winners (not counting Peace and Literature) so far have been born in Yemen, Iraq, Iran, Somalia, Sudan, Libya or Syria although there has been an Iranian born winner of the Fields medal for mathematics.

The general level of the higher educational system in these countries does not inspire confidence that they are bursting with great talent. Of the seven, only Iran has any universities in the Shanghai rankings, the University of Tehran and Amirkabir University of Technology.

The Shanghai rankings are famously selective, so take a look at the ranks of the top universities in the Webometrics rankings, which are the most inclusive, covering more than 12,000 institutions this year.

The position of the top universities from the seven countries is as follows:

University of Babylon, Iraq 2,654
University of Benghazi, Libya  3,638
Kismayo University, Somalia 5,725
University of Khartoum, Sudan   1,972
Yemeni University of Science and Technology 3,681
Tehran University of Medical Science, Iran 478
Damascus Higher Institute of Applied Science and Technology, Syria 3,757.

It looks as though the only country remotely capable of producing innovators, entrepreneurs and scientists is Iran.

Finally, let's look at the scores of students from these countries in the GRE verbal and quantitative tests of 2011-12.

For verbal reasoning, Iran has a score of 141.3, Sudan 140.6, Syria 142.7, Yemen 141.0, Iraq 139.2, and Libya 137.1. The mean score is 150.8 with a standard deviation of 8.4.

For quantitative reasoning, Iran has a score of 157.5, equal to France, Sudan 148.5, Syria 152.7, Yemen 148.6, Iraq 146.4, and Libya 145.5. The mean score is 151.4 with a standard deviation of 8.7.

It seems that of the seven countries only Iran is likely to produce any significant numbers of workers capable of contributing to a modern economy.

No doubt there are other reasons why Apple, Microsoft and Twitter should be concerned about Trump's executive order. Perhaps they are worried about Russia, China, Korea or Poland being added to the restricted list. Perhaps they are thinking about farmers whose crops will rot in the fields, ESL teachers with nothing to do or social workers and immigration lawyers rotting at their desks. But if they really do believe that Silicon Valley will suffer irreparable harm from the proposed restrictions then they are surely mistaken.


Sunday, February 05, 2017

Guest post by Bahram Bekhradnia

I have just received this reply from Bahram Bekhradnia, President of the Higher Education Policy Institute, in response to my review of his report on global university rankings.

My main two points which I think are not reflected in your blog – no doubt because I was not sufficiently clear – are
·       First, the international rankings – with the exception of U-Multirank which has other issues – almost exclusively reflect research activity and performance. Citations and publications of course are explicitly concerned with research, and as you say “International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research and THE's academic reputation survey of teaching is about postgraduate supervision.” And I add (see below) that faculty to student ratios reflect research activity and are not an indicator of a focus on education. There is not much argument that indicators of research dominate the rankings.
Yet although they ignore pretty well all other aspects of universities’ activities they claim nevertheless to identify the "best universities". They certainly do not provide information that is useful to undergraduate students, nor even actually to postgraduate students whose interest will be at discipline not institution level. If they were also honest enough to say simply that they identify research performance there would be rather less objection to the international rankings.

That is why it is so damaging for universities, their governing bodies – and even Governments – to pay so much attention to improving their universities' performance in the international rankings. Resources – time and money – are limited and attaching priority to improving research performance can only be right for a very small number of universities.

·     Second, the data on which they are based are wholly inadequate. 50% of the QS and 30% of the Times Higher rankings are based on nothing more than surveys of “opinion”, including in the case of QS the opinions of dead respondents. But no less serious is that the data on which the rankings are based – other than the publications and prize related data – are supplied by universities themselves and unaudited, or are ‘scraped’ from a variety of other sources including universities’ websites and cannot be compared one with the other. Those are the reasons for the Trinity College Dublin and Sultan Qaboos fiascos. One UAE university told me recently they had (mistakenly) submitted information about external income in UAE Dirhams instead of US Dollars – an inflation of 350% that no-one had noticed. Who knows what other errors there may be – the ranking bodies certainly don’t.
In reply to some of the detailed points that you make
In order to compare institutions you need to be sure that the data relating to each are compiled on a comparable basis, using comparable definitions et cetera. That is why the ranking bodies, rightly, have produced their own data definitions to which they ask institutions to adhere when returning data. The problem of course is that there is no audit of the data that are returned by institutions to ensure that the definitions are adhered to or that the data are accurate.  Incidentally, that is why also there is far less objection to national rankings, which can, if there are robust national data collection and audit arrangements, have fewer problems with regard to comparability of data.
But at least there is the attempt with institution-supplied data to ensure that they are on a common basis and comparable.  That is not so with data ‘scraped’ from random sources, and that is why I say that data scraping is such a bad practice.  It produces data which are not comparable, but which QS nevertheless uses to compare institutions. 
You say that THE, at least, omit faculty on research only contracts when compiling faculty to student ratios.  But when I say that FSRs are a measure of research activity I am not referring to research only faculty.  What I am pointing out is that the more research a university does the more academic faculty it is likely to recruit on teaching and research contracts.  These will inflate the faculty to student ratios without necessarily increasing the teaching capacity over a university that does less research, consequently has fewer faculty but whose faculty devote more of their time to teaching.  And of course QS even includes research contract faculty in FSR calculations.  FSRs are essentially a reflection of research activity. 

Monday, January 30, 2017

Getting satisfaction: look for universities that require good A level grades

If you are applying to a British university and you are concerned not with personal transformation, changing your life or social justice activism but with simple things like enjoying your course, finishing it and getting a job, what would you look for? Performance in global rankings? Staff salaries? Spending? Staff student ratios?

Starting with student satisfaction, here are a few basic correlations between scores for overall student satisfaction in the Guardian UK rankings and a number of variables from the Guardian rankings, the Times Higher Education (THE) TEF simulation, the Hefce survey of educational qualifications, and the THE survey of vice-chancellors' pay.


Average Entry Tariff (Guardian)  .479**
Staff student ratio (Guardian)  .451**
Research Excellence Framework score (via THE)  .379**
Spending per student (Guardian)  .220*
Vice chancellor salary (via THE)  .167
Average salary (via THE)  .031
Total staff (via THE)  .099
Total students (via THE)  .065
Teaching qualifications (Hefce)  -.161 (English universities only)

If there is one single thing that best predicts how satisfied you will be it is average entry tariff (A level grades). The number of staff compared to students, REF score, and spending per student also correlate significantly with student satisfaction.

None of the following are of any use in predicting student satisfaction: vice chancellor salary, average staff salary, total staff, total students or percentage of faculty with teaching qualifications.
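For anyone who wants to reproduce this kind of exercise, Pearson's r is straightforward to compute directly. A minimal sketch, using made-up numbers rather than the actual Guardian figures:

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: invented entry tariffs and satisfaction scores
tariff =       [120, 150, 180, 200, 135, 165]
satisfaction = [78,  82,  88,  90,  80,  85]
print(round(pearson_r(tariff, satisfaction), 3))
```

The asterisks in the table above mark statistical significance, which depends on the sample size as well as the size of r; with only a hundred or so universities, coefficients below about .2 generally fail to reach it.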







Thursday, January 26, 2017

Comments on the HEPI Report

The higher education industry tends to respond to global rankings in two ways. University bureaucrats and academics either get overexcited, celebrating when they are up, wallowing in self-pity when down, or they reject the idea of rankings altogether.

Bahram Bekhradnia of the Higher Education Policy Institute in the UK has published a report on international rankings which adopts the second option. University World News has several comments including a summary of the report by Bekhradnia.

To start off, his choice of rankings deserves comment. He refers to the "four main rankings", the Academic Ranking of World Universities (ARWU) from Shanghai, Quacquarelli Symonds (QS), Times Higher Education (THE) and U-Multirank. It is true that the first three are those best known to the public, QS and Shanghai by virtue of their longevity and THE because of its skilled branding and assiduous cultivation of the great, the good and the greedy of the academic world. U-Multirank is chosen presumably because of its attempts to address, perhaps not very successfully, some of the issues that the author discusses.

But focusing on these four gives a misleading picture of the current global ranking scene. There are now several rankings that are mainly concerned with research -- Leiden, Scimago, URAP, National Taiwan University, US News -- and redress some of the problems with the Shanghai ranking by giving due weight to the social sciences and humanities, leaving out decades old Nobel and Fields laureates and including more rigorous markers of quality. In addition, there are rankings that measure web activity, environmental sustainability, employability and innovation. Admittedly, they do not do any of these very well but the attempts should at least be noted and they could perhaps lead to better things.

In particular, there is now an international ranking from Russia, Round University Ranking (RUR), which could be regarded as an improved version of the THE world rankings and which tries to give more weight to teaching. It uses almost the same array of metrics as THE, plus some more, but with rational and sensible weightings: 8% for field normalised citations, for example, rather than 30%.

Bekhradnia has several comments on the defects of current rankings. First, he says that they are concerned entirely or almost entirely with research. He claims that there are indicators in the QS and THE rankings that are actually, although not explicitly, about research. International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research and THE's academic reputation survey of teaching is about postgraduate supervision.

Bekhradnia is being a little unfair to THE. He asserts that if universities add to their faculty with research-only staff this will add to their faculty student metric, supposedly a proxy for teaching quality, thus turning the indicator into a measure of research. This is true of QS but it appears that THE does require universities to list research staff separately and excludes them from some indicators as appropriate. In any case, the number of research-only staff is quite small for most universities outside the top hundred or so.

It is true that most rankings are heavily, perhaps excessively, research-orientated but it would be a mistake to conclude that this renders them totally useless for evaluating teaching and learning. Other things being equal, a good record for research is likely to be associated with positive student and graduate outcomes such as satisfaction with teaching, completion of courses and employment.

For English universities, the Research Excellence Framework (REF) score is more predictive of student success and satisfaction, according to indicators in the Guardian rankings and the recent THE Teaching Excellence Framework simulation, than the percentage of staff with educational training or certification, faculty salaries or institutional spending, although it is matched by staff student ratio.

If you are applying to English universities and you want to know how likely you are to complete your course or be employed after graduation, probably the most useful things to know are average entry tariff (A levels), staff student ratio and faculty scores for the latest REF. There are of course intervening variables and the arrows of causation do not always fly in the same direction but scores for research indicators are not irrelevant to comparisons of teaching effectiveness and learning outcomes.

Next, the report deals with the issue of data, noting that internal data checks by THE and QS do not seem to be adequate. He refers to the case of Trinity College Dublin, where a misplaced decimal point caused the university to drop several places in the THE world rankings. He then goes on to criticise QS for "data scraping", that is, getting information from any available source. He notes that they caused Sultan Qaboos University (SQU) to drop 150 places in their world rankings, apparently because QS took data from the SQU website that identified non-teaching staff as teaching. I assume that the staff in question were administrators: if they were researchers then it would not have made any difference.

Bekhradnia is correct to point out that data from websites is often incorrect or subject to misinterpretation. But to assume that such data is necessarily inferior to that reported by institutions to the rankers is debatable. QS has no need to be apologetic about resorting to data scraping. On balance, information about universities is more likely to be correct if it comes from one of several similar and competing sources, if it is from a source independent of the ranking organisation and the university, if it has been collected for reasons other than submission to the rankers, or if there are serious penalties for submitting incorrect data.

The best data for university evaluation and comparison is likely to be from third party databases that collect masses of information or from government agencies that require accuracy and honesty. After that, institutional data from websites and the like is unlikely to be significantly worse than that specifically submitted for ranking purposes.

There was an article in University World News in which Ben Sowter of QS took a rather defensive position with regard to data scraping. He need not have done so. In fact it would not be a bad idea for QS and others to do a bit more.

Bekhradnia goes on to criticise the reputation surveys. He notes that recycling unchanged responses over a period of five years, originally three, means that it is possible that QS is counting the votes of dead or retired academics. He also points out that the response rate to the surveys is very low. All this is correct, although it is nothing new. But it should be pointed out that what is significant is not how many respondents there are but how representative they are of the group that is being investigated. The weighting given to surveys in the THE and QS rankings is clearly too much, and QS's methods of selecting respondents are rather incoherent and can produce counter-intuitive results such as extremely high scores for some Asian and Latin American universities.

However, it is going too far to suggest that surveys should have no role. First, reputation and perceptions are far from insignificant. Many students would, I suspect, prefer to go to a university that is overrated by employers and professional schools than to one that provides excellent instruction and facilities but has failed to communicate this to the rest of the world.

In addition, surveys can provide a reality check when a university does a bit of gaming. For example King Abdulaziz University (KAU) has been diligently offering adjunct contracts to dozens of highly cited researchers around the world that require them to put the university as a secondary affiliation and thus allow it to get huge numbers of citations. The US News Arab Region rankings have KAU in the top five among Arab universities for a range of research indicators: publications, cited publications, citations, field weighted citation impact, publications in the top 10% and the top 25%. But its academic reputation rank was only 26, definitely a big thumbs down.

Bekhradnia then refers to the advantage that universities get in the ARWU rankings simply by being big. This is certainly a valid point. However, it could be argued that quantity is a necessary prerequisite to quality and enables the achievement of economies of scale. 

He also suggests that the practice of presenting lists in order is misleading since a trivial difference in the raw data could mean a substantial difference in the presented ranking. He proposes that it would be better to group universities into bands. The problem with this is that when rankers do resort to banding, it is fairly easy to calculate an overall score by adding up the published components. Bloggers and analysts do it all the time.
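The back-calculation that defeats banding is nothing more than a weighted sum of the published component scores. A sketch using THE's stated world-ranking weightings; the component scores for the banded university are invented:

```python
# THE's published world-ranking weightings (2011 onwards):
# three 30% pillars plus international outlook and industry income.
WEIGHTS = {
    "teaching": 0.30, "research": 0.30, "citations": 0.30,
    "international": 0.075, "industry": 0.025,
}

def overall(scores):
    """Rebuild an approximate overall score from component scores."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

# Hypothetical component scores for a university listed only in a band
example = {"teaching": 45.0, "research": 38.0, "citations": 72.0,
           "international": 60.0, "industry": 40.0}
print(round(overall(example), 1))  # 52.0
```

Since the rankers publish the component scores even where the overall list is banded, anyone can recompute a fine-grained ordering, which is why banding on its own does little to discourage rank-chasing.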

Bekhradnia concludes:
"The international surveys of reputation should be dropped
– methodologically they are flawed, effectively they only
measure research performance and they skew the results in
favour of a small number of institutions."

This is ultimately self-defeating. The need and the demand for some sort of ranking is too widespread to set aside. Abandon explicit rankings and we will probably have implicit rankings or recommendations by self-declared experts.

There is much to be done to make rankings better. The priority should be finding objective and internationally comparable measures of student attributes and attainment. That will be some distance in the future. For the moment what universities should be doing is to focus not on composite rankings but on the more reasonable and reliable indicators within specific rankings. 

Bekhradnia does have a very good point at the end:

"Finally, universities and governments should discount therankings when deciding their priorities, policies and actions.In particular, governing bodies should resist holding seniormanagement to account for performance in flawed rankings.Institutions and governments should do what they do becauseit is right, not because it will improve their position in therankings."

I would add that universities should stop celebrating when they do well in the rankings. The grim fate of Middle East Technical University should be present in the mind of every university head.







Sunday, January 22, 2017

What's Wrong with Ottawa?

The University of Ottawa (UO) has been a great success over the last few years, especially in research. In 2004 it was around the bottom third of the 202-300 band in the Shanghai Academic Ranking of World Universities. By 2016 it had reached the 201st place, although the Shanghai rankers still recorded it as being in the 201-300 band. Another signing of a highly cited researcher, another paper in Nature, a dozen more papers listed in the Science Citation Index and it would have made a big splash by breaking into the Shanghai top 200.

The Shanghai rankings have, apart from recent problems with the Highly Cited Researchers indicator, maintained a stable methodology so this is a very solid and remarkable achievement.

A look at the individual components of these rankings shows that UO has improved steadily in the quantity and the quality of research. The score for publications rose from 37.8 to 44.4 between 2004 and 2016, from 13.0 to 16.1 for papers in Nature and Science, and from 8.7 to 14.5 for highly cited researchers (Harvard is 100 in all cases). For productivity (five indicators divided by number of faculty) the score went from 13.2 to 21.5 (Caltech is 100).

It is well known that the Shanghai rankings are entirely about research and ignore the arts and humanities. The Russian Round University Rankings (RUR), however, get their data from the same source as THE did until two years ago, include data from the arts and humanities, and have a greater emphasis on teaching related indicators.

In the RUR rankings, UO rose from 263rd place in 2010 to 211th overall in 2015, from 384th to 378th in five combined teaching indicators and from 177th to 142nd in five combined research indicators. Ottawa is doing well for research and creeping up a bit for teaching related criteria, although the relationship between these and actual teaching may be rather tenuous.

RUR did not rank UO in 2016. I cannot find any specific reason, but it is possible that the university did not submit data for the Institutional Profiles at Research Analytics.

Just for completeness, Ottawa is also doing well in the Webometrics ranking, which is mainly about web activity but does include a measure of research excellence. It is in the 201st spot there also.

It seems, however, that this is not good enough. In September, according to Fulcrum, the university newspaper, there was a meeting of the Board of Governors which discussed not the good results from RUR, Shanghai Ranking and Webometrics but a fall in the Times Higher Education (THE) World University Rankings from the 201-250 band in 2015-16 to the 251-300 band in 2016-17. One board member even suggested taking THE to court.

So what happened to UO in last year's THE world rankings? The only area where it fell was for Research, from 36.7 to 21.0. In the other indicators or indicator groups, Teaching, Industry Income, International Orientation, Research Impact (citations), it got the same score or improved.

But this is not very helpful. There are actually three components in the research group of indicators, which has a weighting of 30%, two of which are scaled. A fall in the research component might be caused by a fall in its score for research reputation, a decline in its reported research income, a decline in the number of publications, a rise in the number of academic staff, or some combination of these.

The fall in UO's research score could not have been caused by more faculty. The number of full time faculty was 1,284 in 2012-13 and 1,281 in 2013-14.

There was a fall of 7.6% in Ottawa's "sponsored research income" between 2013 and 2014 but I am not sure if that is enough to produce such a large decline in the combined research indicators.

My suspicion is -- and until THE disaggregate their indicators it cannot be anything more -- that the problem lies with the 18% weighted survey of postgraduate teaching. Between 2015 and 2016 the percentage of survey respondents from the arts and humanities was significantly reduced while that from the social sciences and business studies was increased. This would be to the disadvantage of English speaking universities, including those in Canada, relatively strong in the humanities and to the advantage of Asian universities relatively strong in business studies. UO, for example, is ranked highly by Quacquarelli Symonds (QS) for English, Linguistics and Modern Languages, but not for Business Management Studies and Finance and Accounting.

This might have something to do with THE wanting to get enough respondents for business studies after they had been taken out of the social sciences and given their own section. If that is the case, Ottawa might get a pleasant surprise this year since THE are now treating law and education as separate fields and may have to find more respondents to get around the problem of small sample sizes. If so, this could help UO which appears to be strong in those subjects.

It seems, according to another Fulcrum article, that the university is being advised by Daniel Calto from Elsevier. He correctly points out that citations had nothing to do with this year's decline. He then talks about the expansion in the size of the rankings with newcomers pushing in front of UO. It is unlikely that this in fact had a significant effect on the university since most of the newcomers would probably enter below the 300 position and since there has been no effect on its score for teaching, international orientation, industry income or citations (research impact).

I suspect that Calto may have been incorrectly reported. Although he says it was unlikely that citations could have had anything to do with the decline, he is reported later in the article to have said that THE's exclusion of kilo-papers (those with 1,000 or more authors) affected Ottawa. But the kilo-papers were excluded in 2015 so that could not have contributed to the fall between 2015 and 2016.

The Fulcrum article then discusses how UO might improve. M'hamed Aisati, a vice-president at Elsevier, suggests getting more citations. This is frankly not very helpful. The THE methodology means that more citations are meaningless unless they are concentrated in exactly the right fields. And if more citations are accompanied by more publications then the effect could be counter-productive.

If UO is concerned about a genuine improvement in research productivity and quality there are now several global rankings that are quite reasonable. There are even rankings that attempt to measure things like innovation, teaching resources, environmental sustainability and web activity.

The THE rankings are uniquely opaque in that they hide the scores for specific indicators. They are extremely volatile, and they depend far too much on dodgy data from institutions and on reputation surveys that can be extremely unstable. Above all, the citations indicator is a hilarious generator of absurdity.

The University of Ottawa, and other Canadian universities, would be well advised to forget about the THE rankings or at least not take them so seriously.


Monday, January 09, 2017

Outbreak of Rankophilia

A plague is sweeping the universities of the West: rankophilia, an irrational and obsessive concern with position and prospects in global rankings and an unwillingness to exercise normal academic caution and scepticism.

The latest victim is Newcastle University whose new head, Chris Day, wants to make his new employer one of the best in the world. Why does he want to do that?
"His ambition - to get the university into the Top 100 in the world - is not simply a matter of personal or even regional pride, however. With universities increasingly gaining income from foreign students who often base their choices of global rankings, improving Newcastle’s position in the league tables has economic consequences."
So Newcastle is turning its back on its previous vision of becoming a "civic university" and will try to match its global counterparts. It will do that by enhancing its research reputation.
"While not rowing back from Prof Brink’s mantra of 'what are we good at, but what are we good for?', Prof Day's first week in the job saw him highlighting the need for Newcastle to concentrate on improving its reputation for academic excellence."
It is sad that Day recognises that the core business of a university is not enough and that what really matters is proper marketing and shouting to rise up the tables.

Perhaps Newcastle will ascend into the magic 100, but the history of the THE rankings over the last few years is full of universities -- Alexandria, University of Tokyo, Tokyo Metropolitan University, University of Copenhagen, Royal Holloway, University of Cape Town, Middle East Technical University and others -- that have soared in the THE rankings for a while and then fallen, often because of nothing more than a twitch of a methodological finger.

Meanwhile Cambridge is yearning to regain its place in the QS top three and Yale is putting new emphasis on science and research with an eye on the global rankings.




Sunday, January 01, 2017

Ranking Teaching Quality

There has been a lot of talk lately about the quality of teaching and learning in universities. This has always been an important element in national rankings such as the US News America's Best Colleges and the Guardian and Sunday Times rankings in the UK, measured by things like standardised test scores, student satisfaction, reputation surveys, completion rates and staff student ratio.

There have been suggestions that university teaching staff need to be upgraded by attending courses in educational theory and practice or by obtaining some sort of certification or qualification.

The Higher Education Funding Council for England (HEFCE) has just published data on the number of staff with educational qualifications in English higher education institutions.

The university with the highest percentage of staff with some sort of educational qualification is Huddersfield, which unsurprisingly is very pleased. The university's website reports HEFCE's assertion that “information about teaching qualifications has been identified as important to students and is seen as an indicator of individual and institutional commitment to teaching and learning.”

The top six universities are:
1.  University of Huddersfield
2.  Teesside University
3.  York St John University
4.  University of Chester
5= University of St Mark and St John
5= Edge Hill University.

The bottom five are:
104=   London School of Economics
104=   Courtauld Institute of Art
106.    Goldsmiths College
107=   University of Cambridge
107=   School of Oriental and African Studies.

It seems that these data provide almost no evidence that a "commitment to teaching and learning" is linked with any sort of positive outcome. Correlations with the overall scores in the Guardian rankings and the THE Teaching Excellence Framework (TEF) simulation are negative (-.550 and -.410 [-.204 after benchmarking]).

In addition, the correlation between the percentage of staff with teaching qualifications and the Guardian indicators is negative for student satisfaction with the course (-.161, insignificant), student satisfaction with teaching (-.197, insignificant), value added (-.352) and graduate employment (-.379).

But there is a positive correlation with student satisfaction with feedback (.323).

The correlations with the indicators in the THE simulation were similar: graduate employment -.416 (-.249 after benchmarking), completion -.449 (-.130, insignificant, after benchmarking), and student satisfaction -.186, insignificant (-.056 after benchmarking, insignificant).
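The figures above are ordinary Pearson correlation coefficients. As a minimal sketch of how such a coefficient is computed (the numbers below are invented for illustration and are not the HEFCE, Guardian or THE data), a value near -1 means that institutions with a higher percentage of qualified staff tend to score lower on the indicator:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance (unnormalised) and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: percentage of staff with teaching qualifications
# versus an overall ranking score, for seven imaginary institutions.
qualified_pct = [72, 65, 58, 40, 33, 20, 12]
overall_score = [55, 60, 58, 66, 70, 75, 80]

r = pearson_r(qualified_pct, overall_score)
```

With small samples like the 100-odd English institutions here, a coefficient also needs a significance test before it means much, which is why several of the weaker correlations above are flagged as insignificant.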

The report does cover a variety of qualifications, so it is possible that digging deeper might show that some types of credentials are more useful than others. Also, there are intervening variables: some of the high scorers, for example, are upgraded teacher training colleges with relatively low status and a continuing emphasis on education as a subject.

Still, unless you count a positive association with feedback, there is no sign that forcing or encouraging faculty to take teaching courses and credentials will have positive effects on university teaching.