Why is the LSE not one of the top universities in the world according to the Academic Ranking of World Universities? I scattered some thoughts on the UK higher education system in an article on my blog the other month, and promised to examine what Shanghai Jiao Tong University’s methodology made of three British universities I consider highly competitive, namely LSE, Sussex and Warwick, all of which failed to make the top 100 of the 2007 ranking. I have come to the conclusion that what seems to me an anomaly illustrates either a flaw in the methodology, or a misuse by me, since the ranking’s design goal does not meet my needs.

However, the criticisms I arrived at are also mentioned on Wikipedia in its article on the ARWU, as part of a broader discussion of university rankings. On further study, I find the breadth of the index incredibly narrow, and I question whether the individual indicators are appropriate for the purposes claimed. The use of the survey by the Economist, the EU Commission and its ecosystem really needs to be questioned.

The ranking struggles to distinguish between institutions beyond No. 99, and places the lower-ranked ones into large bands. The University of Sussex is in the first of these bands, 102-150; LSE is in the next, 151-202; and Warwick in the one after, 203-304.

If one examines the methodology, some explanations begin to emerge. One of the indicators is “Articles published in Nature and Science”, used to assess the quality of an institution’s “Research Output”. Where this indicator is not applied, its points are reallocated over the other indicators, though I have not discovered how. There are problems here. Firstly, these are both English-language publications, which may bias high scores towards institutions in English-speaking countries, helping to explain the US and UK dominance, and possibly part of the explanation for Canada’s 5th place. (It might be interesting to calculate the distribution of the top 100 by language.) Secondly, a number of institutions would not consider these publications documents of record for their primary research focus. Shanghai Jiao Tong University has developed a workaround for institutions specialising in the humanities and social sciences, which it applies to the LSE.

I have examined the base data, and cannot apply the published weights to the published indicator scores and reproduce Shanghai Jiao Tong University’s total score. Since the University publishes its scores and weightings, I have recreated the summary scores for the purposes of my analysis. I have also designed two alternative weightings. The first allocates the 20% assigned to the publication of articles in Nature & Science across the remaining indicators in proportion to their existing contribution. The second seeks to keep the “Research Output” score at 40% by allocating the missing 20% to the other Research Output indicator, “Articles in Science Citation Index-expanded, Social Science Citation Index”. The original weights by category are as follows:


Category               Weight   Factors
Quality of Education     10%       1
Quality of Faculty       40%       2
Research Output          40%       2
Size                     10%       1
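The two reweighting schemes described above can be sketched in a few lines of Python. This is a minimal sketch, not my original spreadsheet: the indicator names follow the ARWU 2007 methodology (Alumni, Award, HiCi, N&S, PUB, plus the per-capita size adjustment), but the institution scores below are invented for illustration, deliberately shaped like an LSE-style profile with strong publication output and few Nature & Science papers.

```python
# Three weighting schemes applied to hypothetical per-indicator scores (0-100).
# Indicator names follow ARWU 2007; the scores are illustrative, not real data.

ORIGINAL_WEIGHTS = {
    "Alumni": 0.10,   # Quality of Education
    "Award":  0.20,   # Quality of Faculty: staff winning Nobel Prizes / Fields Medals
    "HiCi":   0.20,   # Quality of Faculty: highly cited researchers
    "N&S":    0.20,   # Research Output: articles in Nature & Science
    "PUB":    0.20,   # Research Output: articles in SCI-E / SSCI
    "Size":   0.10,   # Per-capita performance
}

def proportional_weights():
    """Scheme 1: drop N&S and spread its 20% across the remaining
    indicators in proportion to their existing weights."""
    rest = {k: w for k, w in ORIGINAL_WEIGHTS.items() if k != "N&S"}
    total = sum(rest.values())  # 0.80
    return {k: w / total for k, w in rest.items()}

def pub_weights():
    """Scheme 2: keep Research Output at 40% by moving the N&S 20%
    onto the PUB indicator."""
    w = dict(ORIGINAL_WEIGHTS)
    w["PUB"] += w.pop("N&S")
    return w

def composite(scores, weights):
    """Weighted total; indicators missing from `scores` count as zero."""
    return sum(weights[k] * scores.get(k, 0) for k in weights)

# Hypothetical institution: strong publications, few Nature & Science papers.
scores = {"Alumni": 60, "Award": 30, "HiCi": 50, "N&S": 10, "PUB": 80, "Size": 55}

print(composite(scores, ORIGINAL_WEIGHTS))        # ≈ 45.5
print(composite(scores, proportional_weights()))  # ≈ 54.4
print(composite(scores, pub_weights()))           # ≈ 59.5
```

Both alternative schemes lift this kind of profile, which is the behaviour one would expect if the Nature & Science indicator were penalising social-science institutions.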


The final problem with having two calculation methods is deciding when to apply the second, i.e. when “Articles published in Nature and Science” becomes an irrelevant indicator. (I am sure there are some who would argue never.) I have calculated scores using both schemes, the original and my Research Output orientated one, which allows me to compare the effect of the different weights on the ranking. I have applied these techniques to a number of UK universities, and also applied the Guardian’s teaching quality score to them to see if there was much of a difference. The Guardian’s score is departmentally based, and I chose to use the ICT departmental scores. Applying my revised “Research Output” weighting does not have much effect on the position of the LSE; there are one or two interesting differences, but it would seem that we are back to asking how good the indicators are. I noted in my previous article that the methodology favours science and, anecdotally, universities with large medical and bioscience faculties. It might be interesting to look at the big movers and examine the methodological causes of the changes.
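Finding the “big movers” is a simple rank comparison. The sketch below assumes the composite scores for each institution under both schemes are already computed; the names A-D and their scores are made up purely to show the mechanics.

```python
# Compare rank positions under two weighting schemes.
# Institution names and scores are hypothetical.

def ranks(scores):
    """Map name -> 1-based rank, highest composite score first."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

original   = {"A": 72.1, "B": 64.3, "C": 58.9, "D": 55.0}
reweighted = {"A": 70.4, "B": 60.1, "C": 62.5, "D": 61.8}

# Positive = moved up under the reweighted scheme, negative = moved down.
moves = {n: ranks(original)[n] - ranks(reweighted)[n] for n in original}
print(moves)  # {'A': 0, 'B': -2, 'C': 1, 'D': 1}
```

Sorting `moves` by absolute size would surface the institutions most sensitive to the Nature & Science indicator, which is where the methodological causes are worth examining.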

I have come to the conclusion that the Shanghai Jiao Tong University method’s indicators are too narrow to easily answer the questions I am asking, and that the Guardian’s research cannot be used to rank institutions: it only evaluates departments, and aims to evaluate the undergraduate teaching experience. In the notes on its methodology, the Guardian says

To use the indicators’ absolute values would make it virtually impossible to produce an overall table for the institutions, since their position would be dependent on what subjects they teach, rather than on how well they teach it…

and added that

Note that we don’t include research funding, figures from the research assessment exercise or data in that line – this is supposed to be a ranking for undergraduates, not a health check for the university as a whole.

Tediously, it seems that I am repeating criticisms made by sufficient others to have reached the University Rankings page on Wikipedia, but looking into the data always improves one’s understanding.

So the survey may overestimate English-speaking institutions’ success, it probably undervalues teaching outside the pure sciences, and it uses very few indicators. These factors may explain why the ‘wisdom of crowds’, the market’s evaluation expressed in the entry grades required, comes out with very different answers about the LSE. My final conclusion is that this survey is seen by the EU, the Commission and its advisers as too important. Someone should do another one, but what is really needed is an economic or political model that defines a successful university. These are issues for public policy makers, and increasingly in the UK for the people funding tertiary education, which is becoming the students and their families. But if looking to attend a UK university, I’d thoroughly recommend the relevant Guardian guide. They are published each year to help the school-leaving cohort, and they helped me advise my children over the last five years, to the extent they let me.


The UK universities in the 102-150 band include Glasgow, Leeds, Liverpool and Sussex.

Those in the top 100 are Cambridge, Oxford, Imperial, UCL, Manchester, Edinburgh, Bristol, Sheffield, Nottingham, King’s College London and Birmingham.

The Guardian does not score the LSE for teaching ICT.

Canada is 5th, beating France, Italy and Spain, all with larger populations and similar or greater per capita GDP. I am not looking to denigrate Canada’s tertiary education system.

Good British Universities

2 thoughts on “Good British Universities”

  • 3rd February 2016 at 3:07 pm

    This article was originally written in July 2009, as dated, and is one of three I wrote about the Shanghai Jiao Tong University’s world university ranking survey. I copied it to this blog in Feb 2016. There were some surprising anomalies in the ranking table and this article looks at them. It was based on some data crunching I had done to see if one could learn anything about Britain’s university sector and the generation of ICT intellectual property. This article illustrates the difficulty in writing articles of this nature; in particular everything one writes assumes some knowledge on the part of the reader. This article assumes too much knowledge about several subjects.

