Shanghai Jiao Tong Ranking
The Shanghai Jiao Tong ranking was the first of the global university rankings, launched in 2003 as part of China’s Project 985 to modernise its higher education system, a project with the specific intention of creating elite universities as the measure of a world-class system.
The findings of the project led to the creation of the C9 League of elite Chinese universities as a challenge to the elite US institutions. The Shanghai Jiao Tong ranking should be understood in these terms; it was never designed to be a comprehensive management tool representing the totality of universities, or universities further down the rankings. It is the most ‘elite’ of all the university rankings, specifically geared towards grading universities against Anglo-American institutions. Much of its weighting is given over to metrics on which only a very few institutions score at all:
Nobel Laureates (with the literature and peace prizes excluded). To date, Brazil has produced a single Nobel Laureate, Peter Medawar, in 1960. His prize would carry only a heavily discounted fractional count relative to a recent winner, and in any case he was not affiliated to a Brazilian university at the time of his win; he worked at University College London. Indeed, South America has produced only a handful of further eligible winners, most of them from Argentina, the most recent of whom won in 1984 while based in Cambridge. For Fields Medals, South America has produced just one winner, Artur Avila of IMPA in Rio de Janeiro, in 2014.
This elitism is also reflected in the normalisation of the ranking: while other rankings use z-scores based on the mean of the sample of universities, the Shanghai Jiao Tong scales each metric against the top-ranked institution, which is given a score of 100. As a result, ranking position rapidly loses descriptive power outside the top 100, which is why institutions below that point are grouped into bands of 101-150, 151-200 and so on.
While the Shanghai Jiao Tong ARWU is one of the highest-profile and most objective rankings for assessing the global elite universities, it is not the most important for the São Paulo state universities because it is less descriptive of lower-ranked institutions.
Metrics
Table 1: Metrics for ARWU 2017
Area | Metric | Weighting | Source |
---|---|---|---|
Quality of Education (Alumni) | Alumni of an institution winning a Nobel Prize (peace and literature prizes excluded) or Fields Medal. The weight is 100% for alumni obtaining degrees in 2001-2010, 90% for 1991-2000, 80% for 1981-1990, and so on, down to 10% for alumni obtaining degrees in 1911-1920. | 10% | External: Nobel Foundation; Fields Medal: International Mathematical Union |
Quality of Academic Staff (Award) | The total number of staff of an institution winning Nobel Prizes in Physics, Chemistry, Medicine and Economics or the Fields Medal in Mathematics. Staff are defined as those who work at an institution at the time of winning the prize. The weight is 100% for winners after 2011, 90% for 2001-2010, 80% for 1991-2000, 70% for 1981-1990, and so on, down to 10% for winners in 1921-1930. If a winner is affiliated with more than one institution, each institution is assigned the reciprocal of the number of institutions. For Nobel Prizes, if a prize is shared by more than one person, weights are set for winners according to their proportion of the prize. | 20% | External: Nobel Foundation; Fields Medal: International Mathematical Union |
Quality of Academic Staff (HiCi) | The number of Highly Cited Researchers selected by Clarivate Analytics (formerly Thomson Reuters). The Highly Cited Researchers list issued in 2016 was used for the calculation of the HiCi indicator in ARWU 2017. Only the primary affiliations of Highly Cited Researchers are considered. | 20% | External: Clarivate Analytics (formerly Thomson Reuters): 2016 Highly Cited Researchers list for ARWU 2017 |
Research Output (N&S) | The number of papers published in Nature and Science between 2011 and 2015. To distinguish the order of author affiliation, a weight of 100% is assigned for corresponding author affiliation, 50% for first author affiliation (second author affiliation if the first author affiliation is the same as the corresponding author affiliation), 25% for the next author affiliation, and 10% for other author affiliations. Only publications of ‘Article’ type are considered. | 20% | Clarivate Analytics Web of Science |
Research Output (PUB) | Total number of papers indexed in the Science Citation Index-Expanded and Social Science Citation Index in 2015. Only publications of ‘Article’ type are considered. When calculating the total number of papers of an institution, a special weight of two is applied to papers indexed in the Social Science Citation Index. | 20% | Clarivate Analytics Web of Science |
Per Capita Performance (PCP) | The weighted scores of the above five indicators divided by the number of full-time equivalent academic staff. If the number of academic staff for institutions of a country cannot be obtained, the weighted scores of the above five indicators are used. | 10% | Data are obtained from national agencies such as the National Ministry of Education, National Bureau of Statistics, National Association of Universities and Colleges, and National Rector’s Conference. |
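The decade-by-decade discounting and fractional sharing used in the Alumni and Award indicators can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not ARWU’s actual implementation: the laureate records and institution names are invented, and the decay schedule simply follows the description in the table above (100% for the most recent decade, dropping by ten percentage points for each earlier decade).

```python
# Illustrative sketch of the Award-style counting rules described in Table 1.
# All records below are invented examples, not real prize data.

def decade_weight(year, latest_decade_end=2020):
    """Decade-based decay: 100% for the most recent decade, minus ten
    percentage points per earlier decade (the Alumni indicator uses the
    same decay shifted one decade earlier)."""
    decades_back = max(0, (latest_decade_end - year) // 10)
    return max(0.0, 1.0 - 0.1 * decades_back)

def award_score(laureates, institution):
    """Sum decade-decayed, fractionally shared credit for one institution.

    Each record gives the prize year, the winner's share of the prize, and
    the institutions they were affiliated with at the time of winning;
    credit is split equally between those affiliations."""
    total = 0.0
    for rec in laureates:
        if institution in rec["affiliations"]:
            total += (decade_weight(rec["year"])
                      * rec["prize_share"]
                      / len(rec["affiliations"]))
    return total

# Hypothetical example: a 2014 winner sharing the prize three ways and holding
# two affiliations, plus a sole 1985 winner with a single affiliation.
laureates = [
    {"year": 2014, "prize_share": 1 / 3,
     "affiliations": ["University A", "University B"]},
    {"year": 1985, "prize_share": 1.0,
     "affiliations": ["University A"]},
]

print(award_score(laureates, "University A"))  # 1.0*(1/3)/2 + 0.7*1.0 ≈ 0.87
```

The resulting raw counts for each indicator are then rescaled against the top-scoring institution, as discussed under ‘Weighting and normalisation’ below.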
Specificity
The use of Nobel Laureates and Fields Medals to make up 30% of the total score makes this ranking extremely difficult to progress in. The vast majority of laureates are based at US, UK and a handful of European institutions, and the nature of the awards means that there are very few winners, and therefore very few institutions that score on these metrics at all. Furthermore, there is little that an institution can do to improve this situation: no public policy can reliably win a prize that is so scarce, and as the count is back-dated as far as the early twentieth century (albeit with steadily declining weights), it naturally favours historically established institutions.
The Highly Cited list naturally favours the proliferation of a few research ‘superstars’ over the organic growth of highly cited groups. There has been much debate over whether this in fact fuels a transfer market analogous to that in football, with a few scholars earning far more than their peers in recognition of the quality of research they produce. China, for example, has chosen to follow this route: researchers there can earn multiples of their faculty colleagues’ salaries in bonuses for publishing in journals that rank highly on the Journal Impact Factor index. The rigid career progression structures in the state universities make this option impractical, and it is of questionable desirability for building strong research programmes as opposed to strong researchers. Furthermore, as access to articles becomes increasingly search-engine-driven, with readers finding articles via keyword searches rather than browsing specific journals and volumes, the correlation between journal impact factor and article-level impact is slowly weakening.
Needless to say, these researchers tend to be found in US institutions, where pay structures are much more liberalised. Analysis of subject-based rankings, along with data from CWTS Leiden, suggests that citations are spread more evenly across Brazilian universities, with fewer superstars. This is not necessarily a negative; it is more suggestive of organic growth than of the Saudi Arabian experience, where universities contracted Highly Cited Researchers for large sums of money. It does, however, probably preclude significant entry into this metric, and indeed none of the three universities scores on it.
With 50% of the ranking already either near impossible or extremely difficult to score in, the remaining 50% is where the three universities can focus their efforts.
Volume of publication (PUB) is not size-normalised, so this metric naturally favours USP over UNICAMP or UNESP, simply because its active community of scholars is so much larger. The state universities already publish a huge volume of material, well above their peers in the same ranking groups, so further increases in publication volume would bring little additional benefit on this metric.
Publications in Nature and Science are a loose proxy for research quality and citation impact. All three universities perform within the normal range for the groups in which they find themselves, although all three sit at the lower end of that range, and so could concentrate on publishing more articles in these two journals.
Weighting and normalisation
As mentioned above, unlike the Times Higher Education and Quacquarelli Symonds rankings, the ARWU scales each metric against the top-ranking institution for that metric, which is given a score of 100; all other institutions are given scores as a percentage of this number. In z-normalised rankings, scores are scaled against the sample mean, meaning that institutions further down the ranking are better represented. In the ARWU, institutions outside the top 100 on any metric typically score very low, and are therefore prone to small variations having large effects on ranking position. Because of this, institutions outside the top 100 are grouped into bands of 50, meaning that changes in ranking position are difficult to assess.
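A short sketch may help make the contrast concrete. The raw metric values below are invented, and real rankings apply further rescaling before publication, but the comparison shows how the ARWU’s percentage-of-top scaling compresses a skewed distribution relative to a z-score against the sample mean.

```python
import statistics

# Invented raw metric values for ten hypothetical institutions, ordered from
# strongest to weakest, chosen to be heavily skewed towards the top scorer.
raw = [950, 400, 310, 220, 150, 90, 60, 35, 20, 10]

# ARWU-style scaling: the top institution is set to 100 and every other
# institution is expressed as a percentage of that top score.
arwu_scaled = [100 * x / max(raw) for x in raw]

# z-score scaling, as used by rankings that normalise against the sample mean:
# each institution is expressed in standard deviations from the mean.
mean = statistics.mean(raw)
stdev = statistics.stdev(raw)
z_scaled = [(x - mean) / stdev for x in raw]

for x, a, z in zip(raw, arwu_scaled, z_scaled):
    print(f"raw={x:4d}  ARWU-style={a:6.1f}  z-score={z:+.2f}")

# With a distribution skewed like this, most institutions are compressed into a
# narrow band near the bottom of the ARWU-style scale, so small changes in raw
# values can produce large swings in rank position further down the table.
```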