THE (Times Higher Education)

Website: www.timeshighereducation.com

The Times Higher Education and Quacquarelli Symonds (QS) rankings split in 2009 amid disputes over the validity of the results being produced. The new THE ranking uses a set of thirteen different metrics in an attempt to represent a broader variety of institutional aspects.

They can be broken into five separate areas: research volume, citation impact, teaching, internationalisation and industry income. In an attempt to take into account the diversity of institutional profiles in global higher education, the THE standardises each metric as a Z-score (deviation from the mean of the dataset) and then converts it into a cumulative probability score, allowing for a theoretically fair comparison of the data. However, while this can statistically control for size, there are still advantages associated with particular institutional models and sizes that cannot be statistically adjusted for. For example, a small institution like Caltech benefits from having limited responsibilities towards teaching or extension. This means that, even when the data are normalised for size, the range of functions performed by USP academic staff across different units is far broader than at Caltech, which can maintain an extremely intense focus on research.
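To make the normalisation concrete, the sketch below standardises a set of raw metric values as Z-scores and converts them into cumulative probability scores. It is a minimal illustration in Python, assuming the normal CDF is used for the conversion; THE does not publish the exact transformation.

    # Minimal sketch of the standardisation described above: each raw metric
    # value is converted to a Z-score, then to a 0-100 cumulative probability.
    # The use of the normal CDF is an assumption; THE does not publish the
    # exact transformation.
    from math import erf, sqrt
    from statistics import mean, stdev

    def cumulative_probability_scores(raw_values):
        """Map raw metric values to 0-100 cumulative probability scores."""
        mu, sigma = mean(raw_values), stdev(raw_values)
        z_scores = [(v - mu) / sigma for v in raw_values]
        # Normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
        return [100 * 0.5 * (1 + erf(z / sqrt(2))) for z in z_scores]

    # Five hypothetical institutions with raw citation scores
    print(cumulative_probability_scores([10.0, 25.0, 30.0, 45.0, 90.0]))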

Compiling Team and Financing

The ranking is financed by the Times Higher Education group, a British commercial publisher that depends on advertising revenue, including paid classified advertising in its jobs section.

Reach

According to the press kit for 2017, the Times Higher rankings are consulted by 33% of prospective students, in over 150 countries, with 18 million unique visitors a year. They are most commonly read by junior academics (31%), followed by senior academics (18%), and the geographical distribution of visitors is as follows:

Region % of respondents
Asia 32%
United Kingdom 25%
United States 13%
Continental Europe 15%
Canada 5%
Oceania 4%
Latin America 3%
Others 3%

Weaknesses

The Times Higher has a variety of methodological weaknesses that seriously call into question its value as a ranking. Among them are serious concerns about the validity of reputation surveys (Axel-Berg, 2015; Ordorika, 2013; Marginson, 2014), because of their tendency to reflect a halo effect: general reputation, without the academics surveyed having privileged knowledge of what they are being asked to evaluate. These surveys also heavily favour Anglo-American institutions at the centre of world higher education, as well as some Asian institutions, as a result of the heavily skewed distribution of respondents towards the US and Asia.

The teaching metric is inevitably prejudiced by the difficulty of representing teaching excellence or outcomes across such a huge diversity of educational systems and social and economic contexts. The Times Higher itself admits that the metrics for teaching are rather more descriptive than normative, so comparison in this way is arguably ill-suited to a ranking exercise.

The THE, especially at its lower echelons, is very prone to wild fluctuations in position caused by small variations in statistics. Because the ranking is composed of Z-scores, large parts of the 200-500 group are in fact very close together, so huge rises and falls can be triggered by what amounts to little more than statistical noise. The THE also tends to change its methodology every year, making year-to-year comparison of performance difficult, if not impossible.
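The volatility point can be illustrated with a small simulation (the numbers below are invented, not THE data): when scores in the 200-500 band sit close together, noise of a fraction of a point reshuffles dozens of positions.

    # Illustrative simulation (invented numbers, not THE data): tightly
    # clustered overall scores plus small statistical noise produce large
    # rank swings.
    import random

    random.seed(1)
    # 300 hypothetical institutions packed into a 10-point score band
    scores = [30 + 10 * i / 299 for i in range(300)]

    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        return {inst: pos + 1 for pos, inst in enumerate(order)}

    before = ranks(scores)
    after = ranks([s + random.gauss(0, 0.2) for s in scores])  # ~0.2-point noise

    moves = [abs(before[i] - after[i]) for i in range(300)]
    print(f"largest rank change: {max(moves)} places")
    print(f"average rank change: {sum(moves) / len(moves):.1f} places")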

Metrics and Methodology

Each entry below gives the area and metric, its weighting in the overall score, the data source, and a description.

• Teaching: Reputation survey (15%; external, run for THE by Elsevier; individual responses not released). Global reputation survey of 10,323 responses from 133 countries; respondents are asked to name the 15 universities they believe are the best in each category (research and teaching), based on their own experience. The data aim at a representative sample according to UNESCO data: 19% North America, 33% Asia Pacific, 27% Western Europe, 11% Eastern Europe, 6% Latin America, 3% Middle East and 2% Africa.

• Teaching: Staff-to-student ratio (4.5%; internally reported data). Based on the belief that the lower the number of students per member of staff, the more intensive and attentive the learning environment will be.

• Teaching: Doctorate-to-bachelor's ratio (2.25%; internally reported data). A more postgraduate-intensive environment is supposedly representative of a more rigorous academic environment.

• Teaching: Doctorates-awarded-to-academic-staff ratio (6%; internally reported data). More doctorates produced per member of staff is indicative of a more attentive and research-intensive environment.

• Teaching: Institutional income (2.25%; internally reported data). Institutional income is scaled against academic staff numbers and normalised for purchasing-power parity (PPP). It indicates an institution's general status and gives a broad sense of the infrastructure and facilities available to students and staff.

• Research: Reputation survey (18%; external, run for THE by Elsevier; individual responses not released). The same global reputation survey described above, counted here for the research category.

• Research: Research income (6%; internally reported data). Research income is scaled against academic staff numbers and adjusted for PPP.

• Research: Research productivity (6%; Elsevier Scopus index plus internally reported data). Number of papers published in the Scopus index per scholar (whether this means FTE academic staff or includes part-time or postgraduate staff is unclear).

• Impact: Citations (30%; Elsevier Scopus index). Elsevier examined more than 56 million citations to 11.9 million journal articles, conference proceedings, books and book chapters published between 2011 and 2015. Citation counts are normalised by area and mixed half and half between country-adjusted scores and total scores. Papers with over 1,000 authors are counted fractionally according to contribution.

• International: International-to-domestic-student ratio (2.5%; internally reported data). Number of foreign citizens studying at the university (it is unclear whether this covers only degree-level programmes).

• International: International-to-domestic-staff ratio (2.5%; internally reported data). Number of international academics compared with domestic academics. It is unclear whether this is defined by visa/settlement status, passport status or contract type.

• International: International collaboration (2.5%; Elsevier Scopus index). Proportion of papers published with at least one co-author working at a foreign university.

• Industry: Industry income (2.5%; internally reported data). Research income an institution earns from industry (adjusted for PPP), scaled against the number of academic staff it employs.
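The weightings above sum to 100%. Below is a minimal sketch of how an overall score can be assembled from them, assuming a simple weighted sum of already-normalised (0-100) indicator scores; the weights are THE's published figures, while the aggregation code itself is illustrative.

    # Aggregation of THE's published indicator weights. Indicator scores are
    # assumed to already be 0-100 cumulative probability scores (see the
    # standardisation sketch above).
    WEIGHTS = {
        "teaching_reputation": 0.15,
        "staff_student_ratio": 0.045,
        "doctorate_bachelor_ratio": 0.0225,
        "doctorates_per_staff": 0.06,
        "institutional_income": 0.0225,
        "research_reputation": 0.18,
        "research_income": 0.06,
        "research_productivity": 0.06,
        "citations": 0.30,
        "international_students": 0.025,
        "international_staff": 0.025,
        "international_collaboration": 0.025,
        "industry_income": 0.025,
    }
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%

    def overall_score(indicator_scores):
        """Weighted sum of 0-100 indicator scores."""
        return sum(WEIGHTS[k] * indicator_scores[k] for k in WEIGHTS)

    # Hypothetical institution scoring 50 everywhere except citations
    example = {k: 50.0 for k in WEIGHTS}
    example["citations"] = 90.0
    print(f"overall: {overall_score(example):.1f}")  # 50 + 0.30 * 40 = 62.0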

Reputation survey

Unlike the QS ranking, the Times Higher ranking conducts its survey anew every year, collating over 10,000 responses between January and March. The split of disciplines is as follows: physical sciences 16%; social sciences 15%; life sciences, clinical and health, and engineering 14% each; business and economics 13%; arts and humanities 9%; and computer science 5%.

The regional response rate is the following:

19% from North America, 33% from the Asia Pacific region, 27% from Western Europe, 11% from Eastern Europe, 6% from Latin America, 3% from the Middle East and 2% from Africa. Responses are normalised according to UNESCO data on the distribution of researchers, meaning that for under-represented regions the responses are weighted up. There is a suspicion that this magnifies scores for universities in the 'centre' with strong links around the world, and does less for those in the periphery, which tend to maintain their strongest links with their own region and with the centre, not with other peripheral universities.
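A minimal sketch of the kind of post-stratification this implies, assuming responses are simply reweighted so that each region's share matches the UNESCO target share quoted above (THE does not publish its exact weighting scheme, so the mechanism here is an assumption):

    # Sketch of post-stratification: reweight survey responses so that
    # regional shares match UNESCO target shares. The exact scheme THE
    # uses is not published; this shows the general idea.
    from collections import Counter

    UNESCO_TARGET = {  # target regional shares, as quoted above
        "North America": 0.19, "Asia Pacific": 0.33, "Western Europe": 0.27,
        "Eastern Europe": 0.11, "Latin America": 0.06,
        "Middle East": 0.03, "Africa": 0.02,
    }

    def response_weights(respondent_regions):
        """Per-response weight = target share / observed share of the region."""
        n = len(respondent_regions)
        observed = {r: c / n for r, c in Counter(respondent_regions).items()}
        return [UNESCO_TARGET[r] / observed[r] for r in respondent_regions]

    # 100 hypothetical responses with Africa under-represented (1% vs 2% target)
    regions = (["North America"] * 25 + ["Asia Pacific"] * 30
               + ["Western Europe"] * 28 + ["Eastern Europe"] * 10
               + ["Latin America"] * 4 + ["Middle East"] * 2 + ["Africa"] * 1)
    print(response_weights(regions)[-1])  # the African response counts double: 2.0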

Bibliometrics

Citations data are a per-institution score calculated by Elsevier since 2015 (until 2014 they were supplied by Web of Science). Elsevier provides the Field-Weighted Citation Impact (FWCI) score, per subject and overall. The FWCI score indicates how the number of citations received by an entity's publications compares with the average number of citations received by all other similar publications. 'Similar publications' are understood to be publications in the Scopus database that have the same publication year, type and discipline, as defined by the Scopus journal classification system. An FWCI of 1.00 indicates the global average. In 2015-2016, papers with more than 1,000 authors were excluded because they were having a disproportionate impact on the citation scores of a small number of universities. This year these papers have been reintegrated using a fractional counting approach, which ensures that every university whose academics are authors of such a paper receives at least 5 per cent of the value of the paper, with those that provide the most contributors receiving a proportionately larger share. The total number of publications overall, plus the total number of publications with international co-authorship per institution, are collected subject to 'sufficient publications' criteria (the state universities comfortably fulfil these, so they are not reproduced here).
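A minimal sketch of the two calculations described here: FWCI as citations relative to the average for similar publications, and fractional counting for papers with more than 1,000 authors. The 5 per cent floor comes from THE's description; splitting the remainder proportionally to contributing authors is an assumption.

    # Sketch of the FWCI and fractional-counting rules described above.
    # The 5% floor is from THE's description; the proportional split of
    # the remainder is an assumption.

    def fwci(citations, avg_citations_similar):
        """Field-Weighted Citation Impact: 1.00 is the global average for
        publications of the same year, type and discipline."""
        return citations / avg_citations_similar

    def fractional_credit(authors_per_university, floor=0.05):
        """Split one paper's credit across universities: each receives at
        least `floor`, the rest proportional to contributing authors."""
        total_authors = sum(authors_per_university.values())
        remainder = 1.0 - floor * len(authors_per_university)
        return {uni: floor + remainder * n / total_authors
                for uni, n in authors_per_university.items()}

    print(fwci(12, 8.0))  # 1.5: cited 50% above the global average
    print(fractional_credit({"A": 900, "B": 90, "C": 10}))
    # {'A': 0.815, 'B': 0.1265, 'C': 0.0585} -- sums to 1.0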

Institutional Reporting and Attribution

Self-reported data are submitted via a dedicated portal. 47.5% of this ranking is at least somewhat dependent on internally reported data. This gives ample scope for improved reporting practices to lead to a better understanding of an institution's position.

All bibliometric data are linked via a THE Institution ID, a unique identifier used to pull university information from the Scopus database. Further investigation is required into whether it captures all variations of the Anglicised names of universities (e.g. treating "University of São Paulo School of Medicine" and "Sao Paulo University Faculty of Medicine" as one and the same thing).
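A minimal sketch of the kind of check that investigation could involve: normalising name variants (accents, case, stopwords, crude synonyms) and testing whether they collapse to the same key. The synonym and stopword lists are purely illustrative.

    # Illustrative check for whether Anglicised name variants collapse to
    # one institution. Synonym and stopword lists are purely illustrative.
    import unicodedata

    SYNONYMS = {"faculty": "school"}
    STOPWORDS = {"of", "the"}

    def normalise(name):
        # Strip accents (Sao <- São), lowercase, drop stopwords, map
        # synonyms, and ignore word order by sorting.
        ascii_name = (unicodedata.normalize("NFKD", name)
                      .encode("ascii", "ignore").decode("ascii").lower())
        words = [SYNONYMS.get(w, w) for w in ascii_name.split()
                 if w not in STOPWORDS]
        return " ".join(sorted(words))

    a = "University of São Paulo School of Medicine"
    b = "Sao Paulo University Faculty of Medicine"
    print(normalise(a) == normalise(b))  # True under these crude rules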

Ranking compilation process and rules according to PwC audit

Data collection:

  • A named representative from each institution submits and authorises its institutional data for use in the rankings.
  • Times Higher Education will not self-submit data for an institution without positive confirmation from the named representative of the institution.
  • Prior to submission within the portal, the draft data undergo automatic validation checks reviewed by the named representative.

Processing and exclusions:

  • Institutions must meet seven criteria in order to be included in the Overall Ranking.
  • THE management reviews and approves all institutional data submissions for appropriateness and accuracy, based on prior-year values and gaps within datasets.
  • Financial data provided by institutions are converted into USD using international PPP exchange rates (a minimal conversion sketch follows this list).
  • Institution-level bibliometric (Scopus) and reputation survey data obtained from Elsevier are mapped to THE institution data via THE's institution ID.

Ranking and scoring:

  • Once the final population of institutions and indicators has been prepared, the Rankings are generated by weighting the indicators.

Final reporting:

  • Once indicators and pillars have been calculated for each subject and overall, the results are used to calculate the Main Rankings.
  • The Main Rankings are subsequently reported on the THE website.
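For the PPP conversion step noted above, a minimal sketch assuming financial amounts are divided by a PPP conversion factor (local currency units per international dollar); the factor shown is a placeholder, not an official rate.

    # Sketch of PPP conversion: local-currency amounts divided by a PPP
    # conversion factor (LCU per international dollar). The factor below
    # is a placeholder, not an official rate.
    PPP_FACTORS = {"BRL": 2.3}  # hypothetical BRL per international dollar

    def to_ppp_usd(amount_local, currency):
        """Convert a local-currency amount to PPP-adjusted USD."""
        return amount_local / PPP_FACTORS[currency]

    # e.g. a hypothetical research income of R$ 460 million
    print(f"US$ {to_ppp_usd(460_000_000, 'BRL'):,.0f}")  # US$ 200,000,000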

Related Rankings

  • World Reputation Rankings
  • BRICS and Emerging Economies
  • Young Universities Ranking (UNESP only)
  • Latin America University Ranking