We live in an age of accountability. Public expenditure must be measured in terms of its impact and effective deployment in order to satisfy the requirements of democratic states (RANSON, 2003). One area in which this has had a noticeable impact is higher education, with institutions under pressure to demonstrate the effective deployment of resources in a way that has real impact upon society. Because of this, public evaluation exercises have taken a central role in research policy steering over the past decade, nowhere more so than in the rise of rankings and league tables. In general, this development is to be supported, as it allows higher education institutions to compare and measure performance in a way impossible in previous generations, facilitating a convergence of global institutions and a process of learning by shared experience. However, the nature of hierarchical ranking has sparked a global ‘reputation arms race’ (HAZELKORN, 2009), encouraging the adoption of ranking-improving rather than institution-improving policies and reinforcing the hegemony of the established order.
However, as Altbach (2011) states, rankings are now an unavoidable social fact of the modern higher education landscape: they cannot be ignored or escaped and, to some extent, they provide essential insight into the performance of a university compared to its international peers.
The Ranking Approach
For all the increasingly central role rankings and assessment play in global higher education, research into them is still relatively recent and fairly scarce (FRANÇA, 2015; CALDERÓN, 2015). The bulk of theoretical work carried out in Latin America, meanwhile, has tended to play down their importance, point to geographical and linguistic biases, and underline the dominance of the Anglo-American model of university in these assessments. It has also tended to emphasise the fact that many of the most popular rankings are commercial enterprises and therefore reflective of an institutionalisation of higher education as a market (RASETTI, 2010; ORDORIKA, 2011; MOURA, 2013).
Morosini (2013) rightly affirms that while quality can, to a limited extent, be defined across a global context, this should not come at the cost of attending to the situation of the university in an emerging context, meaning that we should always be attentive to both sides of the global/local dynamic.
There has also been some more practical literature in Brazil engaging with rankings. Beuren (2014) produced a combined performance metric for UFRGS with practical proposals for institutional milestones, Axel-Berg (2015) assessed the impact and potential ramifications of rankings on USP, and Righetti (2015) looked at the social impact of rankings on São Paulo state universities, concluding that ranking position and prestige have the most impact on upper-middle-class Brazilian students. Dos Santos (2015) carried out a bibliographic study of Brazilian institutions and their insertion into rankings, concluding that while increased participation in international research projects had facilitated Brazilian institutions’ insertion into rankings, this has not translated into improved positioning on those rankings.
The national and international higher education landscape in which Brazilian higher education finds itself at the beginning of the twenty-first century is qualitatively different from the one it occupied at USP’s foundation in 1934, at its first reform in 1964, or even at its second in 1987. The vast increase in both global and domestic enrolment rates means that higher education is no longer the preserve of a small social elite, but a truly public endeavour with a role in creating a knowledge society. The critical public sphere that the university helps to foster has corresponding effects on the institution itself: the demand for public accountability presents itself as a new imperative, whereby the old, school- or faculty-level ‘elitist’ forms of evaluation are no longer sufficient to respond to societal demand.
On a global level, the institution finds itself inserted into a system that is more globalised, its interlinkages and networks of communication and research denser than ever before. It has been said that global higher education is on the verge of a ‘digital moment’ of significance comparable to the effect the spread of the printing press had on publication and information production. This change comes as a result of the vastly increased development of communication technology: from email and the ready availability of conference-calling technology to online publishing, archiving sources and open access collaborative platforms, the internet has allowed the spontaneous and instantaneous collaboration and dissemination of research across national, social and disciplinary boundaries to confront the complex issues facing global and local society (BERNHEIM, 2008; OLSBO, 2013). This is evident in the proliferation of international student mobility, international and transdisciplinary collaboration, and the explosion in the quantity of published material available.
This means that the modern university of excellence no longer exists in a vacuum: it can no longer simply reflect state goals, remain inward looking, or take an apolitical, detached view of its activities, like a scientist in a laboratory (HOHENDAHL, 2011; LATOUR, 1987). This nexus of issues, identified as the transition from an institution operating within an elite system to one operating within a mass system, means that existing administrative and evaluation systems can no longer cope with the running of the institution (TROW, 1973). It is against this backdrop of public discourse, demand for accountability and the reality of a deterritorialised research landscape that the rise of global university rankings can be understood.
As this rise is often conflated with broader processes of globalisation, it has often been understood in terms of economic globalisation and the intrusion of market logic and corporate governance into an area where they are inappropriate. While this is partly true, and many of the current rankings reflect powerful capital interests in a way that is not desirable, globalisation is not one unified phenomenon with unified inputs, outputs, goals and steering; rather it points to a fragmentation and differentiation process that is non-linear in character (APPADURAI, 1996). While economic globalisation may be a worrying factor for many, the exchange of knowledge and capabilities that higher education can facilitate on a global level reflects the cosmopolitan, solidarist aspect of humanity at perhaps its most complete (ARCHIBUGI, 2004; LINKLATER, 1998). The ability to compare and share experiences on an international level is vital to the success or otherwise of higher education systems, allowing a collaborative and cooperative approach to policy formation and learning through shared experience. Rankings are now an inevitable social fact of twenty-first-century higher education; universities cannot make them disappear by ignoring them (SALMI; ALTBACH, 2011).
What are Rankings?
All types of evaluation or criticism are made in light of a critical standard or ideal. Ranking systems are no different in this respect. When we criticise the government for its healthcare policy, for example, we generally have an idea in mind of what a healthcare system ought to be like: it should be efficient and expedient, it should have up-to-date facilities and drugs, it should be accessible to the whole population and its doctors should be well trained. In the literature this is frequently, and somewhat erroneously, labelled the epistemic limitation of rankings. In a hierarchical structure such as a ranking table there is the implication that the top, number one, is the best or closest form of the ideal object in question, while those trailing are less so. Because of this it is preferable to label this type of limitation as normative in nature rather than epistemic: it is not so much a problem of how to interpret knowledge and information (which would be an epistemic problem) as of how the use of a critical standard affects ranking exercises.
This is a characteristic limitation of all hierarchical league tables: in music sales charts, for example, what is measured is who has sold the most units and is therefore most popular, as opposed to any judgment about artistic merit. Simple quantitative measures are well suited to relatively simple phenomena, but when they are applied to very complex phenomena, like the life of a university, then, in accordance with Campbell’s law, derived from the use of standardised testing in high school systems, they begin to have distortive or deforming effects upon the phenomena they are designed to measure, and subsequently increase the incentive to cheat the metric (CAMPBELL, 1976).
Rankings have the effect of turning vast, multifaceted, complex institutions into one-dimensional representations. This collapse into one-dimensionality is characteristic of all hierarchical rankings, whether they are measures of academic excellence, social inclusion or sustainability. They always necessarily exclude some part or other of a university’s function, identity or reality, and always reduce the many qualitative phenomena required to make judgments about performance into simple quantitative data. It is with this in mind that we should be wary of any claim that a ranking totally represents quality, as something is necessarily lost in reading universalised statistics (LO, 2014; RASETTI, 2010; ALBORNOZ; OSORIO, 2017; NEVES, 2012).
This one-dimensionality affects all university rankings, and so tends to undermine nations’ attempts to formulate alternative rankings when dissatisfied with the outcomes of the mainstream ones, as Russia did in publishing its own Global University Ranking through RatER, which placed Moscow State fourth, higher than Harvard or Cambridge, as a result of its capacity to place graduates in high-ranking executive positions (BATY, 2010). For reasons detailed below, for critics of the centre-periphery structure of rankings or the implicit soft power relations they embody (LO, 2011), rewriting rankings to favour local priorities and redefining global excellence to favour one’s own system is not how nations should approach the problem; they should instead challenge the very notion of hierarchical league rankings, which presuppose a meta-language of research and a uniformity of goals (ERNE, 2007).
There are many different variables that could be used to measure what makes a good university: the number of trees planted by the administration, wheelchair accessibility, the quality of student life and accommodation, or graduate employability scores. All of these clearly reflect something of the Idea of a university, held to some standard of what we feel a university should be. The way in which we choose to rank universities should reflect something of what we think a university ought to be. It is very important to recognise this normative aspect of ranking metrics: we use them as a proxy for excellence, a way of quantitatively measuring how close, or far, an institution is from a set ideal of what a World Class University is.
For the vast array of information that could be recorded about a university, it is a comparatively narrow set of criteria that is used in international rankings, even when compared to domestic rankings (STEINER, 2010). Given the enormous diversity of institution types and structures, of priorities in research and learning outcomes, and of academic traditions, there is not really a corresponding diversity in ranking metrics. The weighting is overwhelmingly, although not entirely, given over to research and academic production, rather than teaching or student experience. This suggests that, at least when it comes to our global institutions, there is a degree of consensus about what makes a university not merely locally dominant, but of global importance (HAZELKORN, 2013).
As Altbach (2004) stated of the World Class University: everybody wants one, but no one is quite sure what one is. It was in the search to define and construct one in China that the first Shanghai Jiao Tong ranking was created (LIU; CHENG, 2013). Because of this, a high ranking position is often used as a proxy for World Class status, since the main aim, at least of the Shanghai Jiao Tong ranking, is to identify those universities at the very top of the global research pyramid.
In the present third wave of composite rankings, the view is complicated by three additional factors: the use of ordinal numbering, whereby statistically insignificant variations in measurement come to have disproportionate effects; the fact that differences in position lower down the rankings are often so small as to be statistically irrelevant, yet can create variations of hundreds of ranking positions (BOOKSTEIN et al., 2010); and the composite nature of the rankings, which means that, by design, rankings can only favour one type of institutional profile rather than allowing for the possibility that more than one set of institutional goals or models can produce excellence. The scores reported in the three rankings focussed on in this review are not absolute values, as they are often read, but scores produced in relation to the top-ranking institution (RAUHVARGERS, 2011).
This means that rankings are not progress reports as such: it is possible for an institution to improve its performance in some aspect of the ranking but still suffer a loss of points because the top institution progressed at a faster rate. This gives the impression of a zero-sum competition rather than the idealistic view of the ready and free exchange of information described above, to which we ought to subscribe (ORDORIKA; LLOYD, 2013).
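To make the consequence of this relative scoring concrete, the sketch below normalises a raw indicator against the top scorer, in the spirit of the mechanism Rauhvargers describes. It is a minimal illustration only: the universities, figures and scaling rule are invented and do not reproduce any ranking agency’s actual methodology.

```python
# Minimal sketch of scoring relative to the top institution
# (illustrative figures only, not any ranking agency's real method or data).

def relative_scores(raw):
    """Scale each institution's raw indicator so the top scorer receives 100."""
    best = max(raw.values())
    return {name: 100 * value / best for name, value in raw.items()}

# Year 1: University B publishes 800 papers; the leader publishes 1,000.
year1 = relative_scores({"Univ A": 1000, "Univ B": 800})
# Year 2: University B improves to 900 papers, but the leader grows to 1,300.
year2 = relative_scores({"Univ A": 1300, "Univ B": 900})

print(year1["Univ B"])  # 80.0
print(year2["Univ B"])  # ~69.2 -- an absolute improvement shows up as a loss of points
```

Under this kind of scaling the zero-sum character follows directly: an institution that improves more slowly than the leader falls even while improving.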
Composite rankings also combine series of data that are not easily reconciled into one composite score. When governments or international organisations measure information, there is a clear idea of what kind of measurement is taking place: inputs are measured compositely as the sum of a variety of inputs, efficiency or productivity as the ratio of outputs to inputs, results as the expected outputs, and impacts as an aggregation of the effects of the results. When measuring data we can therefore look at inputs, outputs, intermediate qualities, efficiency measures, and impacts.
In order to have a coherent composite score we should not mix these types of data. The Human Development Index, for example, is composed quite simply from mean life expectancy at birth, expected years of schooling and GNI per capita. This gives a broad impression of the level of human development in a society, and can facilitate conversations about why it is higher in some countries than in others where GNI per capita is equal. It does not measure levels of democratic participation, empowerment, liberty, security or inequality, nor are there any plans to expand it to cover these, as they are problems of different orders, or contextually dependent. It is simple, transparent and easy to use, but for a representative picture of a population it must be used in conjunction with a variety of other indicators.
A clean measure should also always avoid double-counting problems, wherein the same data are represented multiple times within the score and therefore suffer from a multiplier effect. In this respect, many aspects of rankings count the same phenomena repeatedly in adapted ways: impact, combined with prestige and output into one score, counts the same issue multiple times in subtly different guises. What we see in composite rankings is a mixture of inputs (industrial resources, faculty-to-student ratio and, increasingly, highly cited researchers), productivity scores (papers per faculty member), outputs (number of papers indexed in a specific source), impacts (highly cited papers) and intermediate qualitative information (reputational surveys). From this cocktail, it becomes difficult to say what a composite university ranking score actually measures, because it is not purely a measure of inputs, outputs, intermediate factors or impacts. Instead, according to ranking agencies, rankings measure ‘quality’ or ‘excellence’, neither of which is a coherent or conceptually unified term for which it is possible to create a single metric.
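The double-counting worry can be made concrete with a toy composite. The indicator names, weights and scores below are invented purely for illustration; they simply show how a weighted sum that mixes inputs, productivity, impact and reputation can reward the same underlying strength through several channels at once.

```python
# Hypothetical composite score mixing indicator families
# (invented weights and scores, not those of any real ranking).
WEIGHTS = {
    "reputation_survey": 0.40,      # intermediate, qualitative
    "citations_per_paper": 0.20,    # impact
    "papers_per_faculty": 0.20,     # productivity (output over input)
    "faculty_student_ratio": 0.10,  # input
    "highly_cited_staff": 0.10,     # impact again, via a different route
}

def composite(indicators):
    """Weighted sum of already-normalised (0-100) indicator scores."""
    return sum(WEIGHTS[key] * indicators[key] for key in WEIGHTS)

# A citation-strong university is rewarded three times over: directly
# (citations_per_paper), again via highly_cited_staff, and indirectly via the
# reputation survey that citation visibility helps to build.
example = {
    "reputation_survey": 85,
    "citations_per_paper": 90,
    "papers_per_faculty": 60,
    "faculty_student_ratio": 50,
    "highly_cited_staff": 88,
}
print(round(composite(example), 1))  # 77.8 -- one number blending several different kinds of measure
```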
The time span of measurement for rankings is also problematic: it is not clear that an annual cycle is the most appropriate for capturing institutional change in a university, which is generally slow moving and can take years to show the full consequences of decision making or change. For example, if Amazon deforestation were measured hourly over the course of a week, some interesting conclusions could probably be drawn about the behaviour of the actors involved in deforestation, such as a preference for removing large trees under the cover of darkness, but very little of use could be extrapolated about overall trends in deforestation. Because of this, deforestation is measured over a longer timescale that allows the accurate mapping of change.
Universities, by their nature and governance structures, tend to be durable but slow moving institutions, with the constant satisfaction and cooperation of a variety of stakeholders and actors needed to effect change. Research programs and departments can take years to begin operating to their potential, and major changes in policy do not typically have instant effects, but generate much longer term results. By publishing annually then, rankings are in danger of capturing statistically insignificant fluctuations and natural variations in output rather than long term changes in performance.
Rankings may be able to capture a series of snapshots of university performance, but are unable to describe long term change in a meaningful way. The reason for this cycle is that measurement is not tailored to academic cycles, however long they may be, but to business cycles, which demand annual publication in order to generate revenue for publishers.
In higher education, any system of evaluation can only grasp a small portion of the whole with any level of accuracy. Of the 17,000 higher education institutions in the world today, about one thousand are evaluated by global rankings, five hundred are actually listed, and only a hundred have their scores recorded. This means that these rankings are designed to measure a tiny percentage of global academic activity, considered the absolute elite.
Transparency
The main thing that rankings have been instrumental in fostering in higher education is a culture of public transparency in university administration (SOWTER, 2013).
Following both Habermas (1987) and article 2e of the UNESCO Declaration, the modern university, while fully autonomous from the nation state, must be fully accountable to its population. Rankings offer concrete public pressure for institutions to be more transparent in recording their information, and allow for the contextualisation of the information recorded. This means that instead of being presented with a list of numbers in isolation, it is possible to see how those numbers compare with those of universities of a similar profile internationally. For an institution like USP this is especially important, as its dominant position in the Brazilian educational and cultural landscape means that it tops national rankings comfortably each year and can be led into complacency by national comparison (SCHWARTZMAN, 2005).
World Class Paradigm
In an attempt to solve Altbach’s problem, many commentators have outlined what it really means to be “World Class” as an institution in the twenty-first century. The phrase liberally graces the prospectuses of most universities of stature, and administrative and strategic debate is littered with its completely unreflective use. This has led a number of commentators to suggest that the label is little more than a marketing exercise in the neoliberal global ‘market’ for education: something to entice prospective students and parents without any real substance or significance. Some have even gone so far as to suggest that the entire project, alongside ‘internationalisation’, is chimerical (HALLIDAY, 1999), a delusional project with little meaning outside of the increasing efforts made by universities to market themselves to attract endowments and more high-paying students. This means that in the ‘race for reputation’ (HAZELKORN, 2013; ORDORIKA, 2010), those who are ‘World Class’ are those who come top of the rankings, both in bibliometric impact factors and in reputational surveys.
Salmi rightly presents the emergence of rankings as a response to the question of precisely what and where World Class Universities are, allowing a more open and objective conversation about the idea. Salmi identifies three main components of a world-class university, each of which we see, to some extent, reflected in rankings metrics (SALMI, 2009):
- A concentration of top talent among academic staff and students. This means the ability to choose the best and brightest both locally and internationally. Increasingly there is an acceptance that the internationalisation of staff and student bodies is crucial in defining a world-class, and not merely good, university. This is something Latin American universities struggle with. While, as Salmi says, there are concentrations of talent within the university, Latin American universities tend to favour much more diversity than the World Class paradigm would generally allow, and this in turn affects ranking positions for some Brazilian universities. The state universities, however, have a phenomenal concentration of regional talent as a result of their regionally dominant position, putting them in a good position to assume World Class status.
- An abundance of resources, necessary due to the huge cost of building and maintaining a top comprehensive research-intensive university. Brazilian public universities are almost entirely publicly funded, and the recent crises have brought this issue to the fore once again. It is not within the remit of this review to discuss financing, although it is notable that both Salmi’s account and the UNESCO Declaration point to the need to revolutionise and rethink the financing of higher education, as the costs of maintaining top research and top researchers become ever higher. However, the state of São Paulo, through huge FAPESP funding resources, is an anomaly in the Latin American context, being both freer of federal dependency than the federal universities of other states and endowed with funding streams closer to OECD averages than to those of other states in Brazil.
- Finally, World Class universities need favourable governance and a high level of institutional autonomy in order to be able to steer policy in a responsive and effective way. This applies to the way in which they are funded and the way in which they are administered. The presence of large-scale bureaucracy seriously inhibits the decision-making process, and means that universities cannot formulate and steer research policy effectively. It is also specifically stated that both academic freedom and institutional independence must be protected at all costs; state intervention in higher education policy has proven largely disastrous or counterproductive. It is in these two aspects that, according to Salmi, USP has limiting factors that prevent it from joining the top table. The presence of an ‘assemblist’ model of governance means that it is excessively and inappropriately democratised and overly susceptible to party-political pressures. This means that executive decision making is constrained by the conservative factions of its teaching staff, and excessively bureaucratic. At the federal level, the presence of evaluation instruments ill-suited to a leading research-intensive university, focussing on quantity rather than quality, produces incentives on a national level that are replicated neither in the university’s own strategic goals nor in international metrics. These governance hurdles urgently require addressing if USP is to fulfil its potential.
From these three key points, Salmi then draws out the key characteristics of a World Class university:
- International reputation for research and teaching.
- The presence of research superstars and world leading researchers.
- A number of World Class departments.
- Generates innovative ideas and groundbreaking research, recognised by prizes and awards.
- Attracts the best students and produces the best graduates.
- Is heavily internationalised in its recruitment activities and research production.
- Has a sound financial base, a large endowment income and diversified funding sources.
Salmi gives a very clear picture of the key determinants of a World Class university, and of the challenges for the state universities, although he ignores the role of geopolitics in his analysis; as the BRICS are slowly discovering, simply investing the money, recruiting the academics and copying the policies of the US is not supplying the expected results at the expected speed. There are variants on the institutional model depending on political context and the relationship of the state to society, but the dominance of the Anglo-American model in definitions in the World Class discussion is pronounced (CHENG; LIU, 2006).
Institutional Responses to Rankings
The Rankings in Institutional Strategies and Processes (hereafter RISP) study (HAZELKORN, 2014) found that 93% of respondents in ranked European universities monitor ranking performance. Of these, 60% have specifically dedicated human resources for this purpose, either alone or tied to strategic planning or data collection departments. 85% of these reported directly to the rector. 39% of respondents have made specific institutional, managerial or academic decisions informed by rankings, with a further third planning to do so, while only 29% said that ranking performance has absolutely no impact on decision-making. This reflects a widespread recognition of the value of improved ranking position as a tool for peer comparison, talent and resource attraction and the spread of good governance practice.
Daraio and Bonaccorsi (2016) have identified two emerging trends in this scenario: greater granularity of data and the cross-referencing of data within the European Union. The standardisation of data captured by European institutions is allowing the sharing of good governance practice and a consequently better orientation towards rankings, while giving universities the capacity to look ahead to emerging trends and gain better research insight.
Rankings have doubtless played a serious role in increasing the strategic deployment of resources, although universities in the United States and the United Kingdom have predominantly built upon governance structures developed from the middle of the twentieth century in response to rapidly growing higher education systems (REICHARD, 2012). The development and typology of these offices have been broken down by Volkwein (2008, 2012) into a four-part ecology: craft structures contain one or two part-time volunteers, usually pulled from faculty. Adhocracies are composed of informal research groups formed for specific projects according to institutional need.
Where the institution sees a need to implant the capacity for this type of research more permanently, these adhocracies evolve into professional bureaucracies with rigid structures and permanent specialists. The final configuration is termed elaborate profusion, where each individual dean and provost has their own dedicated team, meaning that longitudinal, institution-wide research is usually done by a single researcher, rather than a dedicated team.
Chirikov (2013) relates the experience of forming an institutional research intelligence office in a Russian university, pointing to the vital role these units have played in aspiring world-class universities in presenting information to rankings. Cheslock and Kroc (2012) note that this is especially important where national statistics are incomplete, unreliable, or not sufficiently oriented to what the ranking bodies are asking for. While they raise the ethical problem of the possibility of gaming rankings, for countries with less sophisticated higher education data reporting systems these units have become essential levellers of the playing field, ensuring that individual institutions do not suffer excessively from the inadequate macro-governance structures of ministries.
Carranza (2010) points out that despite the proliferation of these activities in Asia, MENA and Russia, Latin American universities have been slow or even reluctant to develop their capacities into formal activities. This places them at a competitive disadvantage in ranking performance, and therefore in prestige. This in turn has an effect on a university’s transparency and its ability to form high-impact research groups and make research-driven decisions (CHIRIKOV, 2013).
Conflict with the UNESCO Declaration on Higher Education for the Twenty-First Century
UNESCO’s 1998 World Declaration on Higher Education for the Twenty-First Century: Vision and Action lays out general principles for the development of higher education suited to the twenty-first century. It does this on the basis of the unprecedented increase in demand for, and diversification of, the sector visible in the movement from elite to mass systems of higher education, and in recognition of the fact that, in accordance with the Universal Declaration of Human Rights (Art. 26, paragraph 1), ‘everyone has the right to education’. It emphasises the need for open and fair access to higher education (art. 3), something which all of the major rankings have pointedly disregarded in favour of promoting very selective admissions policies (SALMI, 2009; ORDORIKA; LLOYD, 2011).
It also promotes the advancement and dissemination of research (art. 5) and a long-term commitment to research relevant to the societal context in which it is located (art. 6), again something disregarded by both composite and bibliometric rankings (ibid.).
Perhaps most importantly, it commits to the qualitative evaluation of excellence in universities (art. 11). This section specifically states that ‘due attention should be paid to specific institutional, national and regional contexts in order to take into account diversity and to avoid uniformity’ (art. 11a). This clause would appear specifically to discount the value of a global ranking if we accept Marginson and Pusser’s argument that rankings continually promote and encourage one model of university and one type of excellence regardless of context, a model which also runs up against article 8a, which promotes a diversity of models to satisfy the greatly diversified demand for higher education. This observation has been reinforced by Steiner’s (2010) meta-analysis of the main ranking methodologies, which found that they are, by and large, uniform processes with surprisingly little variation between them. Key determinants are measured in broadly the same way by all three of the main rankings, and it is because of this that it tends to be the same cabal of winners at the top.
Rankings exercises do not fit especially well with the aims set out by UNESCO for the future of higher education, and yet, to co-opt the language of global governance, they have become extremely powerful informal agenda setters in the higher education landscape, with some 90% of executive decision makers confessing to making policy decisions in order to obtain a better ranking position (GIACALONE, 2009; HAZELKORN, 2009). Some commentators have chosen to explain this in the psychological terms of the urge to compete and compare (SOWTER, 2013), while others have chosen to equate the exercise with the increasing incursion of market logic into the higher education sector (MARGINSON; VAN DER WENDE, 2008), or with an abandonment of the principles of scientific investigation itself (GIACALONE, 2009). These are all contributing factors to some extent, but this analysis seeks to take into account the structural and historical factors that explain why ranking exercises have gained such importance, and also where these impulses may be harnessed alongside the preservation of institutional, national and multilateral goals.
Art. 2e states that universities must ‘enjoy full academic autonomy and freedom, conceived of as a set of rights and duties, while being fully responsible and accountable to society.’ It is within this concept that the value of rankings may show itself: in order to be fully accountable to a society, a university must be able to show what it has produced in return for its investments and how it is contributing to society. These two points, although separate, are fundamentally interlinked (GIBBONS et al., 2003). In other words, universities must be able to show the impacts produced by research stemming from the input of public funding. The ability to do this across national contexts allows decision makers and populations to compare and contrast the impact of research much more easily, and allows local populations to assess the quality of a university in a broader and better informed way than would be possible without them.
No Unified Measures of Quality
The conception of a modern research university with social responsibility and a mission to contribute to the formation of knowledge societies has serious consequences for the possibility of constructing a universal evaluation: if research is produced within the context of application, it is its response to this context that forms the measure of impact. This means that there is no universal measure of impact, nor of quality. Instead we look to a conjunction of overlapping partial indicators (MARTIN; IRVINE, 1980): many indicators explain some aspect or facet of quality, but no single quantitative measure is the universal key. The consequence for ranking exercises is serious; if we can only point to a matrix of partially overlapping indicators of quality, then any ranking can show at best a partial picture.
Recent years have seen the rebirth of qualitative public evaluation exercises, such as the Research Assessment Exercise in the United Kingdom, and the Research Quality Framework (RQF) in Australia, using citation information as a guide but considering a broader range of factors. Response to these has been extremely mixed, with many researchers concerned about feeling pressure to make their research ‘accessible’ and ‘marketable’ as opposed to innovative or useful, but they are indicative of a broader trend within policymaking away from such blunt measures of impact.
Rankings, however, do not incorporate such considerations, as a result of the difficulty of incorporating qualitative judgments of impact into quantitative studies. The original ISI index created by Eugene Garfield in 1963 was designed to describe this academic network and assess variations in publication patterns across national and disciplinary cultures. For this, it still plays an invaluable part in any understanding of academic impact; however, its use in composite rankings as the only measure of impact begins to distort publication behaviour by rewarding only certain kinds of publication. Furthermore, although it can consider variations in average citation rates across disciplines, it has a major problem with research that does not fit into any one discipline. In this way existing bibliometric methodologies can be a very good measure of endogenous, Mode 1 research, but not of the heterogeneous growth of Mode 2 research (GIBBONS et al., 1993).
Until very recently, the only way in which impact was quantitatively assessable was through citation counts, but with the rise of the internet we can now conceive of dispersion and consumption in a much more holistic manner: we can measure internet ‘traffic’ through hits and downloads and use social network analysis (HOFFMANN, 2014), which can give a measure of research’s social impact; we can measure spread and citation density rather than simple counts through the use of Eigenfactors; and we can map research around topics, tags and broad themes that interest a variety of disciplinary backgrounds (BORNMANN, 2014). This means that today, journal impact factor (JIF) and citation rates are just one among many measures of research impact. Their use as the only one suggests that many rankings prefer to measure what is convenient to measure rather than what is actually informative (ORDORIKA, 2008).
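To illustrate the difference between simple citation counts and the density-weighted family of measures to which the Eigenfactor belongs, the toy sketch below runs a PageRank-style power iteration over a small invented citation graph. It is a hedged sketch of the general eigenvector-centrality idea, not the Eigenfactor algorithm itself, and the graph and damping value are assumptions chosen only for illustration.

```python
import numpy as np

# Toy citation graph: A[i, j] = 1 means paper j cites paper i (invented data).
A = np.array([
    [0, 1, 1, 1],   # paper 0 is cited by papers 1, 2 and 3
    [0, 0, 1, 0],   # paper 1 is cited by paper 2
    [0, 0, 0, 1],   # paper 2 is cited by paper 3
    [1, 0, 0, 0],   # paper 3 is cited only by paper 0
], dtype=float)

def influence(adjacency, damping=0.85, iterations=100):
    """PageRank-like scores: a citation counts for more when the citing paper is itself influential."""
    n = adjacency.shape[0]
    references_made = adjacency.sum(axis=0)
    references_made[references_made == 0] = 1.0   # avoid division by zero for papers citing nothing
    transition = adjacency / references_made      # column-stochastic transition matrix
    scores = np.full(n, 1.0 / n)
    for _ in range(iterations):
        scores = (1 - damping) / n + damping * transition @ scores
    return scores / scores.sum()

print(A.sum(axis=1))          # raw citation counts: [3. 1. 1. 1.]
print(influence(A).round(3))  # paper 3, cited only by the most influential paper, outranks papers 1 and 2
```

The point of the sketch is simply that weighting citations by the influence of the citing work can reorder a picture that raw counts leave flat, which is the intuition behind citation-density measures.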
Reputational Surveys and their Limitations
Both the QS (O’LEARY; INCE; AL HILI, 2010) and, to a slightly lesser extent, the THE rely on a reputational survey of academic and research quality. This can be seen as an attempt to introduce an element of qualitative judgment into the predominantly quantitative dimensions of bibliometric rankings. It aligns well with the observation that there is no one indicator of quality; rather, quality is a multifaceted phenomenon with an unstable definition. In this respect, it seems desirable to allow professionals to make their own qualitative judgments on elite institutions and to aggregate their responses in this way.
However, reputational rankings have become the most contentious of all metrics in academic circles, for two broadly related reasons: the normative and power dimensions implicit in the measurement of status relations, and the not inconsiderable methodological limitations of such an assessment. Altbach goes as far as to observe that the more weighting is given to reputational surveys, the less we may depend upon the ranking as a policymaking tool (BOWMAN; BASTEDO, 2011; MARTINS, 2005; SAFÓN, 2013; SALMI; ALTBACH, 2011).
The rise of bibliometrics as a method of evaluation closely corresponds to the need to overcome the cronyism that often blights qualitative, departmental or school-level evaluation. It gives public observers, armed with a little specialist knowledge, a way to draw conclusions and make judgments about the productivity and impact of research.
Reputational rankings tend to favour a certain profile of institution. Universities with strong overseas recruitment and delivery profiles usually fare better than institutions that focus more on domestic delivery. This is why the rankings appear to favour Australian universities more heavily than Canadian ones, despite the fact that in terms of systemic health and academic profile Canadian institutions are arguably slightly stronger.
It is especially telling that, in response to Gladwell’s New Yorker editorial on the relative uselessness of reputational rankings (GLADWELL, 2011), the head of the Times Higher Education’s ranking department explained how it has attempted to overcome some of their methodological weaknesses, and that the Times Higher now depends less on reputational surveys than before, without addressing the underlying normative issues related to their use.
Serious questions have been raised about the methodological soundness of reputational surveys. First of all is the problem of respondent reliability. The extent of an academic’s knowledge about the intimate inner workings of other institutions is questionable. Although academics are asked only to comment on institutions of which they have intimate knowledge, in reality that knowledge is likely to be uneven, and therefore unsuited to an ordinal ranking system. If academics are only permitted to evaluate institutions with which or in which they have worked, this is likely, for the vast majority of any sample, to exclude many institutions generally regarded as being of quality. What happens in practice, then, is that whether or not an academic knows specific information about an institution, they evaluate it according to generally accepted status.
When QS began its subject rankings in 2009, for example, Harvard gained third place in the rankings for geography, which is an impressive achievement for an institution that famously closed its geography department in 1948 with the statement that ‘geography is not a serious subject of study’ (SMITH, 1987). Although geography covers a large interdisciplinary space, much of which does overlap with what Harvard does, it is doubtful that this crossover can take the place of a fully operational department. In the English Literature section of the same year, the top 20 included Peking University and the Chinese University of Hong Kong, which reflects the tendency, pointed out by others, to artificially promote regional Asian leaders in reputational rankings as a result of the heavy skew in statistics caused by the predominance of Asian respondents in reputational surveys, where rankings and prestige are much more central concerns to policymaking than in the West.
This is not to suggest that Peking or CUHK are not excellent universities in their own right, but having universities where English is neither the first language nor the language of course delivery rated as stronger than many traditionally strong Anglophone departments seems at best surprising and counterintuitive. To compound the matter, amid widespread ridicule at the first publication, none of these anomalies appeared in any of the subsequent QS subject rankings. This suggests either that the academic community had a collective Damascene moment of clarity and altered its voting habits on the basis of becoming better informed in the space of a year, or that the results were expunged from the count in the face of QS’s public embarrassment, adding fuel to the claim that commercial rankings’ main aim is to reconfirm the status quo as a way of maintaining legitimacy rather than to measure performance accurately (HUANG, 2012).
When Altbach’s centre-periphery conception is taken into account (the centre characterised by dense networks of actors with a relationship to the periphery, and the periphery defined by much looser associations with one another but stronger links with the centre), we would expect the outcome of reputational surveys to promote Anglo-American institutions heavily active in the region, along with a few regional leaders. They would typically not take into account other ‘peripheral’ institutions, such as non-Anglophone Europe or Latin America. This is exactly what we see in the results of both the QS and the THE reputational rankings.
Furthermore, as a result of the volatility present in reputational rankings, it has been suggested that they present too much statistical noise to be of much use in policymaking, particularly as the methodology is tweaked or altered every year and, especially for the THE in 2009-2010, was changed altogether (BOOKSTEIN et al., 2010). This makes year-on-year comparison difficult, as these methodological changes are presented somewhat ambiguously.
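The noise argument can be illustrated with a small simulation. The figures below are entirely invented: when institutions’ underlying scores differ by less than the plausible measurement error, large apparent year-on-year movements in ordinal position appear even though nothing about the institutions has changed.

```python
import random

# Invented illustration of the 'noise' argument: closely packed underlying
# scores plus small measurement error produce large swings in ordinal rank.
random.seed(42)

# 400 institutions separated by only 0.05 points of 'true' score.
true_scores = {f"Univ {i:03d}": 70 - i * 0.05 for i in range(400)}

def observed_ranking(noise_sd):
    """Rank institutions after adding Gaussian measurement noise to each score."""
    noisy = {u: s + random.gauss(0, noise_sd) for u, s in true_scores.items()}
    ordered = sorted(noisy, key=noisy.get, reverse=True)
    return {u: position + 1 for position, u in enumerate(ordered)}

year1 = observed_ranking(noise_sd=1.0)
year2 = observed_ranking(noise_sd=1.0)

# Largest apparent year-on-year movement, produced purely by measurement noise.
print(max(abs(year1[u] - year2[u]) for u in true_scores))
```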
Knowledge as a Global Public Good and the Open Access Ecology
Due to the centrality of knowledge to development and innovation, knowledge, and by extension research, has the character of a global public good (STIGLITZ, 1999). Global public goods have two key properties: non-exclusivity and non-rivalrous consumption. Something is a global public good if its consumption does not deprive another of the opportunity to consume it, and nobody can be denied access to it. Learning and utilising a scientific theory does not deprive anyone else of the ability to use it, and, within certain constraints (the ability to understand it, access to education), nobody can be, or should be, deprived of the right to access it (UNESCO, 1998).
Non-rivalrous consumption means that there is no marginal cost associated with sharing knowledge with others; indeed, it is often beneficial to have the input of outside sources in development. As with other global public goods, the marginal costs associated with knowledge sharing consist in the cost of transmission, in teaching and in publication, not in the sharing of the knowledge itself. From the condition of non-rivalrousness, it must then follow that nobody can be excluded from the consumption of knowledge. This means that knowledge cannot be privately supplied: if knowledge is public then it is impossible to profit from it, since everyone has access to it and competition drives its price towards zero. Where knowledge has application to a practical end, and considering that there is no marginal cost implicit in knowledge sharing, this knowledge should be made as freely available as possible. Any unnecessary restriction on the spread and sharing of knowledge therefore retards the process of development and slows the spread and evolution of knowledge.
If knowledge is not a private, transactional good, we can see it as a post-capitalist good: something which feeds and lives alongside capitalist goods and innovation, but is not itself a transactional good. The market absorbs and depends upon research, either directly through the intellectual property regime or indirectly through the creation of fertile conditions to innovate and share information. In this way we see that the relationship between the economy and knowledge is much more complex than is commonly supposed. Rather than being one and the same thing, as neo-liberal theorists and a good proportion of NPK theorists often suppose (BARTUNEK, 2011), failing to differentiate between commercial and social contexts of application and instead seeing them as bundled, the relationship between the economy and the knowledge economy is mediated by society: by ideology, politics and, most crucially, status (HAZELKORN, 2013; MARGINSON, 2009; ORDORIKA, 2010).
In practice, this normative conception often fails to hold. Privately owned knowledge, such as that protected by intellectual property law, provides, in a constrained way, the exclusive right for private individuals to profit, for a limited time, from the fruits of their research. Military, defence and nuclear research is often withheld from common consumption by governments as a matter of national interest, while competing laboratories and scientists may withhold information from one another, albeit usually only temporarily, before publication (LATOUR, 1987). Furthermore, the publishing industry as it exists today does not serve this purpose, creating artificial scarcity in a resource that is theoretically infinite (WELLEN, 2013; 2004). The commodification of knowledge occurring both through the growth of the private sector of higher education and through the liberalisation of the research industry in the US under Reagan means that the model perpetuated by rankings is that of the old elitist order (MARGINSON, 2009).
This is despite the incipient Open Access movement, propelled by public funding bodies, multilateral organisations and institutions themselves. As the marginal costs of publication fall with advancing technology, eliminating rivalrousness in consumption (my use of a PDF does not inhibit any other user from having the same file), there is both a scientific and a social incentive to ensure that barriers to access to the information produced are kept as low as possible. At present this is not happening: huge paywalls are imposed by organisations like Thomson Reuters, which have preferred to pass the benefits on to shareholders rather than to users.
References
- ALBORNOZ, M.; OSORIO, L. Uso público de la información: el caso de los rankings de universidades. Rev. Iberoam. Cienc. Tecnol. Soc., vol. 12, no. 34. Ciudad Autónoma de Buenos Aires, feb. 2017.
- ALTBACH, P. G.; SALMI, J. The Road to Academic Excellence: The Making of World-Class Research Universities. World Bank, 2011, p. 14.
- APPADURAI, A. Modernity at Large. Minneapolis: University of Minnesota Press, 1996.
- ARCHIBUGI, D. Cosmopolitan Democracy and its Critics: A Review. European Journal of International Relations, 2004.
- AROCENA, R. ; SUTZ, J. Changing knowledge production and Latin American universities. Research Policy, v. 30, p. 1221–1234, 2001.
- AXEL-BERG, J. Competing on the World Stage: the Universidade de São Paulo and Global Universities Rankings. Thesis presented at USP, 2015.
- BALL, S. J. Privatising education, privatising education policy, privatising educational research: network governance and the “competition state”. Journal of Education Policy, v. 24, n. 1, p. 83–99, jan. 2009.
- BATY, P. “THE World University Rankings”. The Times Higher Education Supplement, 2010.
- BARTUNEK, J. M. What has happened to mode 2? British Journal of Management, 2011.
- BEUREN, G. M. Avaliação da Qualidade Institucional através de Rankings Nacionais e Internacionais. Thesis presented at UFRGS, 2014.
- BOOKSTEIN, F. L. et al. Too much noise in the Times Higher Education rankings. Scientometrics, v. 85, p. 295–299, 2010.
- BORNMANN, L. Measuring the broader impact of research: The potential of altmetrics. 27 jun. 2014.
- BOWMAN, N. A.; BASTEDO, M. N. Anchoring effects in world university rankings: Exploring biases in reputation scores. Higher Education, v. 61, p. 431–444, 2011.
- CALDERÓN, AI; PFISTER, M; FRANÇA CM. Rankings Acadêmicos na Educação Superior Brasileira: A Emergência de um Campo de Estudo (1995-2013). In Roteiro, vol. 40 no. 1, 2015.
- CAMPBELL, D. T. Assessing the Impact of Planned Social Change. Paper No. 8 Occasional Paper Series, 1976.
- CARRANZA, M. P.; DURLAND, J.; CORENGIA, A. Exploring the causes of the ‘invisibility’ of IR in Latin America. Paper presented at the AIR 50th Annual Forum, May 29–June 2, Chicago, IL, USA, 2010.
- CHESLOCK, J. ; KROC, R. Managing college enrollments. In Howard, McLaughlin, and Knight 2012, 221–36.
- CHENG, Y.; LIU, N. C. A first approach to the classification of the top 500 world universities by their disciplinary characteristics using scientometrics. Scientometrics, v. 68, n. 1, p. 135–150, 2006.
- CHIRIKOV, I. Research universities as knowledge networks: the role of institutional research, Studies in Higher Education, 38:3, 456-469, 2013. DOI: 10.1080/03075079.2013.773778.
- DARAIO, C.; BONACCORSI, A. Beyond university rankings? Generating new indicators on universities by linking data in open platforms. Journal of the Association for Information Science and Technology, March 2016. Disponível em: http://onlinelibrary.wiley.com/doi/10.1002/asi.23679/abstract
- DOS SANTOS, S. M. O Desempenho Das Universidades Brasileiras Nos Rankings Internacionais Áreas De Destaque Da Produção Científica Brasileira. Thesis presented USP, 2015.
- ERNE, R. On the use and abuse of bibliometric performance indicators: a critique of Hix’s “global ranking of political science departments”. European Political Science, v. 6, n. 3, p. 306–314, 1 set. 2007.
- FRANÇA, C. M. Rankings Universitários Promovidos por Jornais no Espaço Ibero-Americano: El Mundo (Espanha), El Mercurio (Chile) e Folha De São Paulo (Brasil). Thesis Presented Puc-Campinas, 2015.
- GLÄNZEL, W. Bibliometrics as a research field. Techniques, v. 20, 2003.
- GLADWELL, M. The Order of Things. New Yorker, February 14, 2011. Disponível em: http://www.newyorker.com/magazine/2011/02/14/the-order-of-things.
- GIACALONE, R. A. Academic Rankings in Research Institutions: A Case of Skewed Mind-Sets and Professional Amnesia. Academy of Management Learning & Education, 2009.
- HALLIDAY, F. The chimera of the “International University”. International Affairs, v. 75, p. 99–120, 1999.
- HAZELKORN, E. “Problem with University Rankings”. Science and Development Network: Science and Innovation Policy Aid for Higher Education, 2009.
- HAZELKORN, E. How Rankings are Reshaping Higher Education. In: Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence. London: Palgrave, 2011, p. 1–8.
- HAZELKORN, E; LOUKKOLA, T; ZHANG, T. Rankings in Institutional Strategies and Processes: Impact Or Illusion? From EUA publications, 2014. Disponível em: http://www.eua.be/Libraries/publications-homepage-list/EUA_RISP_Publication.pdf?sfvrsn=6
- HABERMAS, J. The Idea of the University – Learning Processes. New German Critique, n. 41, p. 3–22, 1987.
- HUANG, M. H. Opening the black box of QS world university rankings. Research Evaluation, v. 21, p. 71–78, 2012.
- HOFFMANN, C. P.; LUTZ, C.; MECKEL, M. Impact Factor 2.0: Applying Social Network Analysis to Scientific Impact Assessment. 47th Hawaii International Conference on System Sciences. Anais.IEEE, jan. 2014.
- HOHENDAHL, P. U. Humboldt Revisited: Liberal Education, University Reform, and the Opposition to the Neoliberal University. New German Critique, v. 38, n. 2 (113), p. 59–196, jul. 2011.
- LATOUR, B. Science in Action. University of Minnesota Press, 1987.
- LINKLATER, A. Cosmopolitan citizenship. Citizenship Studies, 1998.
- LO, W. University Rankings: Implications for Higher Education in Taiwan. Singapore: Springer, 2014.
- LO, W. Y. W. Soft power, university rankings and knowledge production: distinctions between hegemony and self- determination in higher education. Comparative Education, v. 47, n. 2, p. 209–222, maio 2011.
- LOURENÇO, HS. Os Rankings Do Guia Do Estudante Na Educação Superior Brasileira: Um Estudo Sobre As Estratégias De Divulgação Adotadas Pelas Instituições Que Obtiveram O Prêmio Melhores Universidades. Thesis presented PUC-Campinas, 2014.
- MARCOVITCH, J. (Org.). Universidade em Movimento: Memória de uma Crise. São Paulo: Edusp, 2017.
- MARGINSON, S. National and international rankings of higher education. In: International Encyclopedia of Education, p. 546–553. 2008
- MARTIN, B. R.; IRVINE, J. Assessing basic research: some partial indicators. 1980.
- MARTINS, L. L. A Model of the Effects of Reputational Rankings on Organizational Change in Organization Science, 2005.
- MOURA, BA; MOURA, LBA. Ranqueamento de universidades: reflexões acerca da construção de reconhecimento institucional. Acta Scientiarum. Education Maringá, v. 35, n. 2, p. 213-222, July-Dec., 2013
- MOROSINI, M. Qualidade da educação superior e contextos emergentes. Avaliação (Campinas) vol.19 no.2 Sorocaba, jul. 2014. Disponível em:http://dx.doi.org/10.1590/S1414-40772014000200007
- NEVES, T; PEREIRA J; NATA, G. One-Dimensional School Rankings: a non-neutral device that conceals and naturalises inequality. In the International Journal of School Disaffection, Vol. 9, no. 1, 2012. Disponível em: http://repositorio.uportu.pt/bitstream/11328/1138/1/TNeves_OneDimensionalSchoolRankings_IJSD_Junho2012.pdf
- NOWOTNY, H. ; SCOTT, P.; GIBBONS, M. Introduction: ‘Mode 2’ Revisited: The New Production of Knowledge. Minerva, v. 41, no.3, p. 179–194, 1 set. 2003.
- O’LEARY, J.; INCE, M.; AL HILI, D. QS World University Rankings. Quacquarelli Symonds, 2010, p. 24. Disponível em: http://www.iu.qs.com/product/2010-qs-word-university-rankings-supplement/
- ORDORIKA, I. El ranking Times en el mercado del prestigio universitario. Perfiles educativos, v. XXXII, p. 8–29, 2010.
- ORDORIKA, I.; LLOYD, M. A Decade of International University Rankings: a Critical Perspective from Latin America. In: MAROPE, P. T.; WELLS, P.; HAZELKORN, E. (Eds.). Rankings and Accountability in Higher Education: Uses and Misuses.
- RANSON, S. Public accountability in the age of neo- liberal governance. Journal of Education Policy, v. 18, n. 5, p. 459–480, out. 2003.
- RASETTI, C. P. Contra los rankings de universidades: el marketing pretencioso. In Rev. Iberoam. Cienc. Tecnol. Soc., vol.10, supl.1, Ciudad Autónoma de Buenos Aires, dic. 2015.
- RAUHVARGERS, A. Global University Rankings and Their Impact. European Universities Association Report on Rankings, 2011
- REICHARD, D. The History of Institutional Research. In The Handbook of Institutional Research Howard Eds. Wiley: San Francisco, 2012, p.3.
- RIGHETTI, S. Qual é a melhor? Origem, indicadores, limitações e impactos dos rankings universitários. Thesis presented UNICAMP, 2016.
- SAFÓN, V. What do global university rankings really measure? The search for the X factor and the X entity. Scientometrics, v. 97, p. 223–244, 2013.
- SCHWARTZMAN, S. Brazil’s leading university: between intelligentsia, world standards and social inclusion. p. 1–36, 2005.
- SMITH, N. (1987) Academic War over the field of Geography: The Elimination of Geography at Harvard 1947-1951. In Annals of the Association of American Geographers, vol.77, no.2, 1987.
- STEINER, J. E. World University Rankings – A Principal Component Analysis from Instituto de Estudos Avançados- Universidade de São Paulo, 2010. Disponível em: http://www.cs.odu.edu/~mukka/cs795sum10dm/Lecturenotes/Day4/0605252.pdf
- STIGLITZ, J. Knowledge as a Global Public Good. In: KAUL, I.; GRUNBERG, I.; STERN, M. (Eds.). Global Public Goods. New York: UNDP, 1999. p. 308.
- TROW, M. Problems in the transition from elite to mass higher education. 1973.
- VOLKWEIN, J. F, LIU, J; WOODELL, J. The Structure and Functions of Institutional Research Offices. In The Handbook of Institutional Research Howard Eds. Wiley: San Francisco, 2012.
- VOLKWEIN, J. F. The foundations and evolution of institutional research. In Dawn Terkla (Ed.), Institutional research: More than just data (pp. 5–20). New Directions for Higher Education, no. 141. San Francisco: Jossey-Bass, 2008.
- WELLEN, R. Open Access, Megajournals, and MOOCs: On the Political Economy of Academic Unbundling. SAGE Open, v. 3, no.4, 23 out. 2013. Disponível em: http://journals.sagepub.com/doi/abs/10.1177/2158244013507271