National Taiwan University Ranking
With any ranking, it is important to know exactly what is being ranked before reading the results. The National Taiwan University Ranking (formerly known as HEEACT) measures a university's performance in articles published in peer-reviewed journals indexed in the Web of Science. It does not measure teaching performance, employability or student experience, and takes no interest in inputs, funding, infrastructure, patents or innovation. In this sense, it is not really a university ranking in the Times Higher or QS mould. Because of its limited mission, it explores academic impact in considerable detail, certainly more helpfully than the larger, more comprehensive composite rankings. It has used largely the same metrics and scoring system since becoming the NTU Ranking in 2013, following its earlier incarnation as the HEEACT ranking.
The ranking covers the top 500 universities worldwide at both the institutional and field level, and the top 300 in each subject according to Clarivate Web of Science categories.
The ranking is also subcategorised into six field rankings and 14 individual subject rankings. While coverage by field is moderately comprehensive (with the exception of the arts and humanities, for which a ranking such as this is of relatively limited use, since it only measures articles in peer-reviewed journals), the subject rankings are limited to a few areas of knowledge of specific interest to Taiwan, especially considering that the Web of Science itself has 26 separate categories of research.
| Indicator | Description | Weight | Source |
| --- | --- | --- | --- |
| Research productivity: 11-year articles | Articles published in the SCI-E and SSCI over the past 11 years (2006-2016). The count is taken in April each year, to ensure that the full year's total is captured. | 10% | InCites Essential Science Indicators |
| Research productivity: current articles | Articles published in the SCI-E and SSCI over the past year. The count is taken in April each year, to ensure that the full year's total is captured. | 15% | InCites Essential Science Indicators |
| Research impact: 11-year citations | The total number of citations a university's output has received over the past 11 years. | 15% | Web of Science SCI and SSCI |
| Research impact: current citations | The number of citations a university has received over the past two years. | 10% | InCites Essential Science Indicators |
| Research impact: average citations | The average number of citations per paper the institution has received over the past 11 years, which attempts to balance distortions created by large institutions. | 10% | Web of Science SCI and SSCI |
| Research excellence: H-index | The institutional Hirsch index: the largest number h such that h of the institution's papers have each received at least h citations. | 10% | Web of Science SCI and SSCI |
| Research excellence: highly cited papers | The number of papers on the Highly Cited list over the past ten years (current range 2005-2015). | 15% | InCites Essential Science Indicators "Highly Cited Papers" |
| Research excellence: current articles in highly cited journals (2014-2015) | A two-year count of articles in the top 1% by citations for the two preceding years. | 15% | Web of Science SCI and SSCI |
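As an illustration of the H-index indicator described above (a minimal sketch of the standard Hirsch calculation, not NTU's actual code), the institutional h-index is the largest h such that h of the institution's papers each have at least h citations:

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:
            h = position  # this paper still clears the bar
        else:
            break  # all later papers have fewer citations
    return h

# Hypothetical institution whose papers have these citation counts:
papers = [25, 8, 5, 3, 3, 0]
print(h_index(papers))  # → 3 (three papers each have at least 3 citations)
```

Note that, as the text later observes, this rewards a concentrated core of well-cited papers: the many low-cited or uncited papers beyond position h do not affect the score at all.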
The NTU uses Clarivate's Web of Science and Essential Science Indicators exclusively for its analysis. This generally yields a lower count of included papers for Brazilian universities than Scopus/SciVal would. It does, however, mean that the average number of citations per paper will be higher, and the set of highly cited work will be broadly similar. It is therefore unlikely that the lower number of indexed papers will be seriously detrimental to the score.
Because 50% of the weighting in this ranking is taken from performance over the previous decade (11-year articles = 10%, 11-year citations = 15%, average citations = 10%, highly cited papers = 15%), and a further 10% for the H-index is not time-sensitive, just 40% of the ranking is open to substantial change from one year to the next. This means that the huge leaps and falls in ranking position seen in other rankings do not generally occur in the NTU. Universities are far more likely to broadly maintain their position year on year, even with quite large changes in score from one year to the next.
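The weight arithmetic above can be checked directly (indicator names are paraphrased from the methodology table):

```python
# Decade-based NTU indicators, which move slowly year to year
decade_weights = {
    "11-year articles": 0.10,
    "11-year citations": 0.15,
    "average citations": 0.10,
    "highly cited papers": 0.15,
}

decade_share = sum(decade_weights.values())      # 50% from the past decade
slow_moving = decade_share + 0.10                # plus the stable H-index
volatile_share = 1 - slow_moving                 # what can shift in one year

print(f"decade-based share: {decade_share:.0%}")        # → decade-based share: 50%
print(f"open to annual change: {volatile_share:.0%}")   # → open to annual change: 40%
```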
Size and score normalisation
The NTU is not size-normalised (per FTE staff or per student) in any of the individual metrics that go into the ranking. The normalisations for size, where they exist, are by number of papers: the average citations per paper (10%) and the institutional H-index (10%) measure the concentration of highly cited papers within a given set of publications, not the performance of the institution as a whole (i.e. if a large number of academic staff publish nothing at all, but those who do publish are highly cited, the institution will still score highly).
The NTU will therefore naturally favour large institutions with high numbers of active scholars over smaller, research-intensive institutions. We would expect the assessment to favour USP, as it is the largest with the largest number of active scholars; UNESP should also perform well, while UNICAMP will be slightly less favoured as it is smaller and therefore does not produce as much research. In the overall ranking, the final column (Reference Ranking) gives the total score normalised per FTE as a ranking position, providing a measure of research intensity. It is not included in the count for the ranking as a whole. This is a deliberate decision, reflecting a desire to avoid the double-counting problems present in some rankings that count efficiency measures alongside their constituent components.
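The per-FTE reference ranking described above can be sketched as follows (the institution names and figures are hypothetical; the real ranking divides the composite score by FTE academic staff):

```python
# Hypothetical institutions: (name, overall score, FTE academic staff)
institutions = [
    ("Large University", 80.0, 6000),
    ("Small Intensive University", 60.0, 1500),
]

# Reference ranking: order by score per FTE rather than by raw score
by_intensity = sorted(institutions, key=lambda t: t[1] / t[2], reverse=True)

for name, score, fte in by_intensity:
    print(f"{name}: {score / fte * 1000:.1f} points per 1,000 FTE")
```

Here the smaller institution ranks first on research intensity despite the lower raw score, which is exactly why this column is kept out of the main ranking: folding it back in would double-count its constituent components.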
The NTU is normalised using a t-score, a standard score very similar to the z-score common in other rankings and widely used in educational assessment and quality-assurance exercises. The z-score is computed from the sample mean and standard deviation, then rescaled so that the sample mean sits at 50 and the scale runs from 0 to 100 on a normal distribution.
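A minimal sketch of the conventional t-score transformation follows, assuming the standard rescaling T = 50 + 10z (NTU does not publish its exact constants, so the multiplier of 10 is an assumption; it places roughly five standard deviations on either side of 50, spanning 0-100):

```python
import statistics

def t_scores(values):
    """Rescale raw scores so the sample mean maps to 50 and one SD is 10 points."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return [50 + 10 * (v - mean) / sd for v in values]

raw = [120, 150, 180, 210, 240]  # hypothetical raw indicator scores
print([round(t, 1) for t in t_scores(raw)])  # → [37.4, 43.7, 50.0, 56.3, 62.6]
```

Note how the middle value, which equals the sample mean, lands exactly on 50, while the others are spaced symmetrically around it in units of 10 points per standard deviation.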