Ranking Monitor Reference

CWTS Leiden

The Leiden ranking seeks to represent only scholarly impact and collaboration through the Clarivate Web of Science. As such it is much more limited in scope than the main rankings.

Leiden University

The Leiden ranking covers only scholarly impact and collaboration, measured through the Clarivate Web of Science, but within this limited scope it aims to represent these two aspects, which are present in all rankings, at a level of granularity that is unavailable elsewhere. Performance in CWTS Leiden can therefore reveal important insights into the publication behaviour of a university, which in turn can shed light on other rankings, where citation metrics tend to be highly aggregated and difficult to interpret in depth.

In contrast with most other rankings, CWTS Leiden avoids over-aggregation and weighting problems by leaving the choice of metrics to the user rather than combining them into a single composite ranking. It also offers other representations, such as maps and charts, rather than only ordinal rankings.

Compiling Team and Financing

The CWTS team is based at the Centre for Science and Technology Studies at Leiden University. It is staffed by researchers and academics and is fully public rather than commercial.

Coordination

  • Mark Neijssel
  • Nees Jan van Eck
  • Ludo Waltman
  • Paul Wouters

Identification of universities

  • Clara Calero-Medina
  • Maia Francisco Borruel
  • Giulia Moriconi
  • Andrea Reyes Elizondo
  • Martijn Visser

Impact indicators

  • Nees Jan van Eck
  • Ludo Waltman

Industry collaboration indicators

  • Robert Tijssen
  • Wout Lamers
  • Bert van der Wurff
  • Alfredo Yegros

Distance-based collaboration indicators

  • Robert Tijssen
  • Wout Lamers
  • Nees Jan van Eck
  • Ludo Waltman

Weaknesses

  • Limited scope.
  • Focuses only on scholarly impact, not on teaching, the third mission, or anything else.
  • Difficult to use and does not produce headline results.

Indicators

The CWTS Leiden Ranking 2017 is based exclusively on bibliographic data from the Web of Science database produced by Clarivate Analytics.

IMPACT indicators (source: Clarivate Web of Science)

  • P(top 1%) and PP(top 1%): the number (P) and the proportion (PP) of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 1% most frequently cited.
  • P(top 10%) and PP(top 10%): the number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 10% most frequently cited.
  • P(top 50%) and PP(top 50%): the number and the proportion of a university’s publications that, compared with other publications in the same field and in the same year, belong to the top 50% most frequently cited.
  • TCS and MCS: the total and the average number of citations of the publications of a university.
  • TNCS and MNCS: the total and the average number of citations of the publications of a university, normalized for field and publication year. An MNCS value of two, for instance, means that the publications of a university have been cited twice above the average of their field and publication year.

COLLABORATION indicators

  • P(collab) and PP(collab): the number and the proportion of a university’s publications that have been co-authored with one or more other organizations.
  • P(int collab) and PP(int collab): the number and the proportion of a university’s publications that have been co-authored by two or more countries.
  • P(industry) and PP(industry): the number and the proportion of a university’s publications that have been co-authored with one or more industrial organizations. All private sector for-profit business enterprises, covering all manufacturing and services sectors, are regarded as industrial organizations. This includes research institutes and other corporate R&D laboratories that are fully funded or owned by for-profit business enterprises. Organizations in the private education sector and the private medical/health sector (including hospitals and clinics) are not classified as industrial organizations.
  • P(<100 km) and PP(<100 km): the number and the proportion of a university’s publications with a geographical collaboration distance of less than 100 km, where the geographical collaboration distance of a publication equals the largest geographical distance between two addresses mentioned in the publication’s address list.
  • P(>5000 km) and PP(>5000 km): the number and the proportion of a university’s publications with a geographical collaboration distance of more than 5000 km.
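To make the field and year normalization behind TNCS and MNCS concrete, here is a minimal sketch, assuming a toy list of publications with hypothetical field and year labels; the actual CWTS calculation runs over the full Web of Science database and uses algorithmically defined fields.

```python
from collections import defaultdict

# Hypothetical publication records: (university, field, publication year, citations).
# In the real ranking the baselines are computed over all core publications worldwide;
# this toy list stands in for that.
publications = [
    ("Univ A", "chemistry", 2013, 12),
    ("Univ A", "chemistry", 2013, 3),
    ("Univ A", "sociology", 2014, 5),
    ("Univ B", "chemistry", 2013, 30),
    ("Univ B", "sociology", 2014, 1),
]

# Average citations per (field, year) cell: the normalization baseline.
totals = defaultdict(lambda: [0, 0])  # (field, year) -> [citation sum, paper count]
for _, field, year, cites in publications:
    totals[(field, year)][0] += cites
    totals[(field, year)][1] += 1
baseline = {cell: s / n for cell, (s, n) in totals.items()}

def tncs_mncs(university):
    """TNCS: sum of field/year-normalized citation scores; MNCS: their mean."""
    scores = [cites / baseline[(field, year)]
              for uni, field, year, cites in publications if uni == university]
    return sum(scores), sum(scores) / len(scores)

print(tncs_mncs("Univ A"))  # an MNCS of 2.0 would mean twice the field/year average
```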

Universities’ performance

Indicators

The CWTS Leiden Ranking 2017 offers a sophisticated set of bibliometric indicators that provide statistics on the scientific impact of universities and on universities’ involvement in scientific collaboration. The indicators available in the Leiden Ranking are discussed in detail below.

Publications

The Leiden Ranking is based on publications in the Web of Science database produced by Clarivate Analytics. The most up-to-date statistics made available in the Leiden Ranking are based on publications in the period 2012–2015, but statistics are also provided for a number of earlier periods. Web of Science includes a number of citation indices. The Leiden Ranking uses the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. Only publications of the Web of Science document types article and review are taken into account. The Leiden Ranking does not consider book publications, publications in conference proceedings, and publications in journals not indexed in the above-mentioned citation indices of Web of Science.

The Leiden Ranking takes into account only a subset of the publications in the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. We refer to the publications in this subset as core publications. Core publications are publications in international scientific journals in fields that are suitable for citation analysis. In order to be classified as a core publication, a publication must satisfy the following criteria:

  • The publication has been written in English.
  • The publication has one or more authors. (Anonymous publications are not allowed.)
  • The publication has not been retracted.
  • The publication has appeared in a core journal.

The last criterion is a very important one. In the Leiden Ranking, a journal is considered a core journal if it meets the following conditions:

  • The journal has an international scope, as reflected by the countries in which the researchers publishing in the journal and citing it are located.
  • The journal has a sufficiently large number of references to other core journals, indicating that the journal is situated in a field that is suitable for citation analysis. Many journals in the arts and humanities do not meet this condition. The same applies to trade journals and popular magazines.
In the calculation of the Leiden Ranking indicators, only core publications are taken into account. Excluding non-core publications ensures that the Leiden Ranking is based on a relatively homogeneous set of publications, namely publications in international scientific journals in fields that are suitable for citation analysis. The use of such a relatively homogeneous set of publications enhances the international comparability of universities. It should be emphasized that non-core publications are excluded not because they are considered less important than core publications. Non-core publications may have an important scientific value. About one-sixth of the publications in Web of Science are excluded because they have been classified as non-core publications.

Our concept of core publications should not be confused with the Web of Science Core Collection. The Web of Science Core Collection represents a subset of the citation indices available in Web of Science. As explained above, the core publications on which the Leiden Ranking is based represent a subset of the publications in the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index.

A list of core and non-core journals is available in this Excel file.

Size-dependent vs. size-independent indicators

Except for the publication output indicator P, all indicators included in the Leiden Ranking have two variants: a size-dependent and a size-independent variant. In general, size-dependent indicators are obtained by counting the absolute number of publications of a university that have a certain property, while size-independent indicators are obtained by calculating the proportion of the publications of a university with a certain property. For instance, the number of highly cited publications of a university and the number of publications of a university co-authored with other organizations are size-dependent indicators. The proportion of the publications of a university that are highly cited and the proportion of a university’s publications co-authored with other organizations are size-independent indicators. In the case of size-dependent indicators, universities with a larger publication output tend to perform better than universities with a smaller publication output. Size-independent indicators have been corrected for the size of the publication output of a university. So when size-independent indicators are used, both larger and smaller universities may perform well.
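A minimal sketch of the two variants, assuming hypothetical records in which each publication already carries a top-10% flag determined per field and publication year:

```python
# Hypothetical records: (university, in_top_10_percent).
records = [
    ("Univ A", True), ("Univ A", False), ("Univ A", False), ("Univ A", True),
    ("Univ B", True), ("Univ B", False),
]

def p_top10(university):
    """Size-dependent variant: absolute number of top 10% publications."""
    return sum(1 for uni, top in records if uni == university and top)

def pp_top10(university):
    """Size-independent variant: share of the university's output in the top 10%."""
    total = sum(1 for uni, _ in records if uni == university)
    return p_top10(university) / total

# The larger university scores higher on P(top 10%), but both score the same
# on PP(top 10%), illustrating the correction for publication output.
print(p_top10("Univ A"), pp_top10("Univ A"))  # 2 0.5
print(p_top10("Univ B"), pp_top10("Univ B"))  # 1 0.5
```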

Collaboration indicators

Some limitations of the above indicators need to be mentioned. In the case of the P(industry) and PP(industry) indicators, we have made an effort to identify industrial organizations as accurately as possible. Inevitably, however, there will be inaccuracies and omissions in the identification of industrial organizations. In particular, the identification of industrial organizations in publications from 2015 is incomplete, causing a downward bias in the P(industry) and PP(industry) indicators for the period 2012–2015. In the case of the P(<100 km), PP(<100 km), P(>5000 km), and PP(>5000 km) indicators, we rely on geocoding of addresses listed in Web of Science. There may be some inaccuracies in the geocoding that we have performed, and for addresses that are used infrequently no geocodes may be available. In general, we expect these inaccuracies and omissions to have only a small effect on the indicators.

Counting method

The impact indicators in the Leiden Ranking can be calculated using either a full counting or a fractional counting method. The full counting method gives a full weight of one to each publication of a university. The fractional counting method gives less weight to collaborative publications than to non-collaborative ones. For instance, if a publication has been co-authored by five researchers and two of these researchers are affiliated with a particular university, the publication has a weight of 2 / 5 = 0.4 in the calculation of the impact indicators for this university. The fractional counting method leads to a more proper field normalization of impact indicators and therefore to fairer comparisons between universities active in different fields. For this reason, fractional counting is the preferred counting method for the impact indicators in the Leiden Ranking. Collaboration indicators are always calculated using the full counting method.
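A minimal sketch of the difference between the two counting methods, using the five-author example above; the affiliation data are hypothetical and simplified to one affiliation per author.

```python
def publication_weight(university, author_affiliations, fractional=True):
    """Weight of a single publication for a university.

    author_affiliations: one affiliation per author (a simplification of the
    real data, in which an author can list several affiliations).
    """
    n_authors = len(author_affiliations)
    n_university = sum(1 for a in author_affiliations if a == university)
    if n_university == 0:
        return 0.0
    if fractional:
        # Fractional counting: the university's share of the authors.
        return n_university / n_authors
    # Full counting: any participation yields a full weight of one.
    return 1.0

# Five authors, two of them affiliated with "Univ X".
affiliations = ["Univ X", "Univ X", "Univ Y", "Univ Y", "Univ Z"]
print(publication_weight("Univ X", affiliations))                    # 0.4
print(publication_weight("Univ X", affiliations, fractional=False))  # 1.0
```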

Trend analysis

To facilitate trend analyses, the Leiden Ranking provides statistics not only based on publications from the period 2012–2015, but also based on publications from six earlier periods: 2006–2009, 2007–2010, 2008–2011, 2009–2012, 2010–2013, and 2011–2014. The statistics for the different periods are calculated in a fully consistent way. For each period, citations are counted until the end of the first year after the period has ended. For instance, in the case of the period 2006–2009 citations are counted until the end of 2010, while in the case of the period 2012–2015 citations are counted until the end of 2016.
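A minimal sketch of this citation window rule, with hypothetical citing years:

```python
def counted_citations(citing_years, period_end):
    """Citations counted until the end of the first year after the period."""
    window_end = period_end + 1
    return sum(1 for year in citing_years if year <= window_end)

# For the 2012-2015 period the window closes at the end of 2016,
# so the citations from 2017 and 2018 below are not counted.
print(counted_citations([2013, 2015, 2016, 2017, 2018], period_end=2015))  # 3
```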

Stability intervals

Stability intervals provide some insight into the uncertainty in bibliometric statistics. A stability interval indicates a range of values of an indicator that are likely to be observed when the underlying set of publications changes. For instance, the PP(top 10%) indicator may be equal to 15.3% for a particular university, with a stability interval ranging from 14.1% to 16.5%. This means that the PP(top 10%) indicator equals 15.3% for this university, but that changes in the set of publications of the university may relatively easily lead to PP(top 10%) values in the range from 14.1% to 16.5%. The Leiden Ranking employs 95% stability intervals constructed using a statistical technique known as bootstrapping.
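A minimal sketch of how such a stability interval could be obtained by bootstrapping, that is, by repeatedly resampling a university’s publications with replacement; the data are hypothetical and the exact CWTS procedure may differ in detail.

```python
import random

def pp_top10(flags):
    """PP(top 10%): share of publications flagged as top 10% in their field and year."""
    return sum(flags) / len(flags)

def stability_interval(flags, n_samples=1000, seed=42):
    """Approximate 95% stability interval for PP(top 10%) via bootstrapping."""
    rng = random.Random(seed)
    estimates = sorted(
        pp_top10([rng.choice(flags) for _ in flags])  # resample with replacement
        for _ in range(n_samples)
    )
    return estimates[int(0.025 * n_samples)], estimates[int(0.975 * n_samples)]

# Hypothetical university: 200 publications, 30 of them in the top 10%.
flags = [True] * 30 + [False] * 170
print(pp_top10(flags), stability_interval(flags))  # point estimate 0.15 plus interval
```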

Identification of universities

The CWTS Leiden Ranking 2017 includes 903 universities worldwide. These universities have been selected based on their number of Web of Science indexed publications in the period 2012–2015. As discussed below, a sophisticated data collection methodology is employed to assign publications to universities.

Identifying universities is challenging due to the lack of clear internationally accepted criteria that define universities. Typically, a university is characterized by a combination of education and research tasks in conjunction with a doctorate-granting authority. However, these characteristics do not mean that universities are particularly homogeneous entities that allow for international comparison on every aspect. As a result of its focus on scientific research, the Leiden Ranking presents a list of institutions that have a high degree of research intensity in common.

Nevertheless, the ranking scores for each institution should be evaluated in the context of its particular mission and responsibilities, which are strongly linked to national and regional academic systems. Academic systems – and the role of universities therein – differ substantially between countries and are constantly changing. Inevitably, the outcomes of the Leiden Ranking reflect these differences and changes.

The international variety in the organization of academic systems also poses difficulties in terms of identifying the proper unit of analysis. In many countries, there are collegiate universities, university systems, or federal universities. Instead of applying formal criteria, whenever possible we follow common practice based on the way these institutions are perceived locally. Consequently, we treat the University of Cambridge and the University of Oxford as entities, whereas in the case of the University of London we distinguish between the constituent colleges. For the United States, university systems (e.g. the University of California) are split up into separate universities. The higher education sector in France, like in many other countries, has gone through several reorganizations in recent years. Many French institutions of higher education have been grouped together in Communautés d’Universités et Etablissements (COMUEs), succeeding the earlier Pôles de Recherche et d’Enseignement Supérieur (PRES). Except in the case of full mergers, the Leiden Ranking still distinguishes between the different constituent institutions.

Publications are assigned to universities based on their recent configuration. Changes in the organizational structures of universities up to 2016 have been taken into account. For example, in the Leiden Ranking 2017, Grenoble Alpes University encompasses all publications previously assigned to Joseph Fourier University, Pierre Mendès-France University, and Stendhal University.

A key challenge in the compilation of a university ranking is the handling of publications originating from research institutes and hospitals affiliated with universities. Among academic systems, a wide variety exists in the types of relations maintained by universities with these affiliated institutions. Usually, these relationships are shaped by local regulations and practices affecting the comparability of universities on a global scale. As there is no easy solution for this issue, it is important that producers of university rankings employ a transparent methodology in their treatment of affiliated institutions.

CWTS distinguishes three different types of affiliated institutions:

  1. Component
  2. Joint research facility or organization
  3. Associated organization

In the case of a component, the affiliated institution is actually part of or controlled by the university. Universitaire Ziekenhuizen Leuven is an example of a component, since it is part of the legal entity of Katholieke Universiteit Leuven. A joint research facility or organization is identical to a component except that it is administered by more than one organization. The Brighton & Sussex Medical School (the joint medical faculty of the University of Brighton and the University of Sussex) and Charité (the medical school of both the Humboldt University and the Freie Universität Berlin) are examples of this type of affiliated institution.

The third type of affiliated institution is the associated organization, which is more loosely connected to a university. This organization is an autonomous institution that collaborates with one or more universities based on a joint purpose but at the same time has separate missions and tasks. In many countries, hospitals that operate as teaching or university hospitals fall into this category. The Massachusetts General Hospital, one of the teaching hospitals of the Harvard Medical School, is an example of an associated organization.

The Leiden Ranking 2017 counts a publication as output of a university if at least one of the affiliations in the publication explicitly mentions either the university or one of its components or joint research facilities. In a limited number of cases, affiliations with academic hospitals that are not controlled or owned by the university are also treated as if they were mentioning the university itself. The rationale for this is that in some cases academic hospitals – although formally being distinct legal entities – are so tightly integrated with the university that they are commonly perceived as being a component or extension of that university. Examples of this situation include the university medical centers in the Netherlands and some of the academic health science systems in the United States and other countries. In these cases, universities have actually delegated their medical research and teaching activities to the academic hospitals and universities may even no longer act as the formal employer of the medical researchers involved. In other cases, tight integration between a university and an academic hospital may manifest itself by an extensive overlap in staff. In this situation, researchers may not always mention explicitly their affiliation with the university. An example of this tight integration is the relation between the University Hospital Zurich and the University of Zurich.

The list of academic hospitals that have been treated as a component of a university for the 2017 edition is available here. Inevitably, some degree of arbitrariness is involved in the decision to treat an academic hospital as a component even though it constitutes an independent legal entity. We have discussed this in more detail in a blog post.

Affiliated organizations that are not classified as a component or a joint research facility, or treated as such, are labeled associated organizations. In the case of publications with affiliations from associated organizations, a distinction is made between publications from associated organizations that also mention the university and publications from associated organizations that do not include a university affiliation. In the latter case, a publication is not considered to originate from the university. On the other hand, if a publication includes an affiliation from a particular university as well as an affiliation from an associated organization, both affiliations are considered to represent that particular university. The effect of this procedure depends on the counting method that is used in the calculation of bibliometric indicators. The procedure influences results obtained using the fractional counting method, but it has no effect on results obtained using the full counting method.
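A minimal sketch of the assignment rules described in this section, using a hypothetical registry of affiliated institutions; the actual CWTS address handling is considerably more elaborate.

```python
# Hypothetical registry of affiliated institutions (simplified to a single parent
# university each).
AFFILIATED = {
    "Universitaire Ziekenhuizen Leuven": ("KU Leuven", "component"),
    "Massachusetts General Hospital": ("Harvard University", "associated"),
}

def universities_for_publication(affiliations):
    """Universities a publication is counted for, per the rules sketched above."""
    # Affiliations not in the registry are treated as universities in this toy example.
    universities = {a for a in affiliations if a not in AFFILIATED}
    for affiliation in affiliations:
        if affiliation in AFFILIATED:
            university, relation = AFFILIATED[affiliation]
            # Components (and joint research facilities) always count as the
            # university; associated organizations count only if the university
            # itself also appears in the affiliation list.
            if relation == "component" or university in universities:
                universities.add(university)
    return universities

print(universities_for_publication(
    ["Harvard University", "Massachusetts General Hospital"]))  # {'Harvard University'}
print(universities_for_publication(
    ["Massachusetts General Hospital"]))                        # set()
```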

Selection of universities

The Leiden Ranking 2017 includes 903 universities from 54 different countries. These are all universities worldwide that have produced at least 1000 Web of Science indexed publications in the period 2012–2015. Only so-called core publications are counted, which are publications in international scientific journals.

Also, only research articles and review articles are taken into account. Other types of publications are not considered. Furthermore, collaborative publications are counted fractionally. For instance, if a publication includes five authors of which two belong to a particular university, the publication is counted with a weight of 2 / 5 = 0.4 for that university.

It is important to note that universities do not need to apply to be included in the Leiden Ranking. The universities included in the Leiden Ranking are selected by CWTS according to the procedure described above. Universities do not need to provide any input themselves.

Data quality

The assignment of publications to universities is not free of errors, and it is important to emphasize that in general universities do not verify and approve the results of the Leiden Ranking data collection methodology. Two types of errors are possible. On the one hand, there may be false positives, which are publications that have been assigned to a university when in fact they do not belong to the university. On the other hand, there may be false negatives, which are publications that have not been assigned to a university when in fact they do belong to the university. The data collection methodology of the Leiden Ranking can be expected to yield substantially more false negatives than false positives. In practice, it turns out to be infeasible to manually check all addresses occurring in Web of Science. Because of this, many of the 5% least frequently occurring addresses in Web of Science have not been manually checked. This 5% can be considered a reasonable upper bound on such errors, since most likely the majority of these addresses do not belong to universities.

Updates and corrections

The following updates and corrections have been made to the CWTS Leiden Ranking.

June 21, 2017

  • The update of the 2017 edition of the Leiden Ranking that took place on June 19 has led to some minor errors in the indicators for a small number of universities. These errors have been corrected.

June 19, 2017

Update of the 2017 edition of the Leiden Ranking. The following corrections have been made:

  • Ulsan National Institute of Science and Technology in South Korea has been added to the ranking. This university had incorrectly been omitted. As a result of adding this university, the total number of universities included in the ranking has increased from 902 to 903.
  • The indicators for Norwegian University of Science and Technology have been corrected. The merger with the university colleges of Sør-Trøndelag, Ålesund, and Gjøvik had incorrectly not been taken into account.
  • The indicators for Arizona State University in the US have been corrected. Publications from some campuses had incorrectly not been assigned to Arizona State University.
  • The indicators for Université Grenoble Alpes in France have been corrected. Some minor corrections have been made to the publication data for this university.

May 17, 2017

Release of the 2017 edition of the Leiden Ranking. The following changes have been made compared with the 2016 edition:

  • Indicators on collaboration with industry. These indicators have been reintroduced in the Leiden Ranking. They were available in earlier editions of the ranking, but they were not included in the 2016 edition.
  • Fractional counting method. In earlier editions of the Leiden Ranking, address-level fractional counting was used. In the 2017 edition, we have switched to author-level fractional counting. In the case of address-level fractional counting, each address in the address list of a publication has equal weight. In the case of author-level fractional counting, on the other hand, each author of a publication has equal weight. To illustrate the difference between the two fractional counting approaches, consider a publication that has been co-authored by five researchers. Three researchers are affiliated with university X. The other two researchers are affiliated with university Y. The three researchers affiliated with university X belong to two different departments within the university. Both departments are listed as a separate address in the publication. The two researchers affiliated with university Y belong to the same department within the university. In the case of address-level fractional counting, used in earlier editions of the Leiden Ranking, the publication is assigned to universities X and Y with weights of 2 / 3 = 0.67 and 1 / 3 = 0.33, respectively. In the case of author-level fractional counting, used in the 2017 edition of the Leiden Ranking, the weights with which the publication is assigned to universities X and Y are, respectively, 3 / 5 = 0.6 and 2 / 5 = 0.4. The sketch below works through this example in code.
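A minimal sketch of the worked example above, contrasting address-level and author-level fractional counting; the data simply mirror the example.

```python
from collections import Counter

# Five authors: three at university X (spread over two departments, hence two
# addresses) and two at university Y (one shared address).
author_affiliations = ["X", "X", "X", "Y", "Y"]   # one entry per author
address_affiliations = ["X", "X", "Y"]            # one entry per listed address

def fractional_weights(affiliations):
    """Weight per university: its share of the listed affiliations."""
    counts = Counter(affiliations)
    return {university: n / len(affiliations) for university, n in counts.items()}

print(fractional_weights(address_affiliations))  # address-level: X -> 2/3, Y -> 1/3
print(fractional_weights(author_affiliations))   # author-level:  X -> 0.6, Y -> 0.4
```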

May 18, 2016

Release of the 2016 edition of the Leiden Ranking. The following changes have been made compared with the 2015 edition: 

  • Selection of universities included in ranking. In the 2015 edition of the Leiden Ranking, the 750 universities worldwide with the largest Web of Science indexed publication output were included. In the 2016 edition, we use a different approach to select the universities that are included in the ranking. Rather than selecting a fixed number of universities, we include all universities worldwide whose publication output is above a fixed threshold. This threshold equals 1000 fractionally counted Web of Science core publications in the period 2011–2014. Using this threshold, 842 universities have been selected for inclusion in the 2016 edition.
  • Academic hospitals. The treatment of some academic hospitals has changed in the 2016 edition of the Leiden Ranking. In earlier editions, publications mentioning an affiliation with an academic hospital that is part of or controlled by a university were assigned to the university. In addition, publications from an academic health science center to which a university delegates its medical research and teaching duties were also assigned to the university. In the 2016 edition, publications from academic hospitals that do not satisfy the above criteria but that are nevertheless very tightly integrated with a university are also assigned to the university. Researchers that work at these hospitals but that are employed by the university often turn out not to mention their affiliation with the university. We assume that researchers do not mention their university affiliation because the hospital is perceived to be part of the medical faculty of the university.
  • Inter-institutional collaboration indicators. In the 2015 edition of the Leiden Ranking, a collaboration between a university and an affiliated organization that is considered to be part of the university was regarded as an inter-institutional collaboration. For instance, a collaboration between Leiden University and Leiden University Medical Center was seen as an inter-institutional collaboration. In the 2016 edition, these collaborations are no longer regarded as inter-institutional collaborations.
  • International collaboration indicators. In the 2015 edition of the Leiden Ranking, England, Northern Ireland, Scotland, and Wales were regarded as separate countries in the calculation of the international collaboration indicators. In the 2016 edition, the United Kingdom is regarded as a single country in the calculation of the international collaboration indicators. Hence, a collaboration between for instance an English and a Scottish university is no longer seen as an international collaboration.
  • Indicators on collaboration with industry. These indicators are no longer available in the Leiden Ranking. Indicators on universities’ collaboration with industry will be published separately in 2017.
  • Website. The Leiden Ranking website has been significantly revised. In addition to a traditional list-based presentation, where universities are presented in a list ranked based on a selected indicator, the revised website also offers a chart-based and a map-based presentation. Furthermore, in the list-based presentation on the revised website, size-dependent and size-independent indicators are always presented together and the P indicator has replaced the PP(top 10%) indicator as the default indicator for ranking universities.

More information

More information on the Leiden Ranking methodology can be found in a number of papers published by CWTS researchers. A detailed discussion of the Leiden Ranking is presented by Waltman et al. (2012). This paper relates to the 2011/2012 edition of the Leiden Ranking. Although not entirely up-to-date anymore, the paper still provides a lot of relevant information on the Leiden Ranking.

The algorithmic approach taken in the Leiden Ranking to define scientific fields is described in detail by Waltman and Van Eck (2012). Field normalization of impact indicators based on algorithmically defined fields is studied by Ruiz-Castillo and Waltman (2015). The methodology adopted in the Leiden Ranking for identifying core publications and core journals is outlined by Waltman and Van Eck (2013a, 2013b). Finally, the importance of using fractional rather than full counting in the calculation of field-normalized impact indicators is explained by Waltman and Van Eck (2015).

Bibliography

  • Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E.C.M., Tijssen, R.J.W., Van Eck, N.J., Van Leeuwen, T.N., Van Raan, A.F.J., Visser, M.S., & Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419–2432.
  • Waltman, L., & Van Eck, N.J. (2012). A new methodology for constructing a publication-level classification system of science. Journal of the American Society for Information Science and Technology, 63(12), 2378–2392.
  • Waltman, L., & Van Eck, N.J. (2013a). Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison. Scientometrics, 96(3), 699–716.
  • Waltman, L., & Van Eck, N.J. (2013b). A systematic empirical comparison of different approaches for normalizing citation impact indicators. Journal of Informetrics, 7(4), 833–849.
  • Ruiz-Castillo, J., & Waltman, L. (2015). Field-normalized citation impact indicators using algorithmically constructed classification systems of science. Journal of Informetrics, 9(1), 102–117.
  • Waltman, L., & Van Eck, N.J. (2015). Field-normalized citation impact indicators and the choice of an appropriate counting method. Journal of Informetrics, 9(4), 872–894.
  • Olensky, M., Schmidt, M., & Van Eck, N.J. (2016). Evaluation of the citation matching algorithms of CWTS and iFQ in comparison to Web of Science. Journal of the Association for Information Science and Technology, 67(10), 2550–2564.
  • Waltman, L., Tijssen, R.J.W., & Van Eck, N.J. (2011). Globalisation of science in kilometres. Journal of Informetrics, 5(4), 574–582.