
8.4.8 Some Further Observations on Rankings
Some lists regularly change their methodology, for instance by adapting the weights given to the different indicators. U.S. News & World Report, in particular, has often changed its methodology in the past, leading to sudden shifts in the relative rankings of universities and colleges. Although there is nothing wrong with a change in methodology, provided the new approach is a real improvement, it makes comparisons over time difficult or even impossible.
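To see how a change in indicator weights alone can reorder institutions, consider the following minimal sketch in Python. The two universities, their indicator scores and both weighting schemes are entirely hypothetical; the point is only that the same data yield opposite orderings under different weights.

    # Two hypothetical universities scored on two indicators (0-100 scale):
    # the tuple holds (research score, teaching score).
    scores = {"Univ A": (90, 60), "Univ B": (70, 85)}

    def composite(weights):
        # Weighted sum of the indicator scores for each university.
        return {u: sum(w * s for w, s in zip(weights, vals))
                for u, vals in scores.items()}

    old = composite((0.7, 0.3))   # research-heavy weights: A = 81.0, B = 74.5
    new = composite((0.4, 0.6))   # teaching-heavy weights: A = 72.0, B = 79.0
    print(sorted(old, key=old.get, reverse=True))   # ['Univ A', 'Univ B']
    print(sorted(new, key=new.get, reverse=True))   # ['Univ B', 'Univ A']

Neither weighting is "wrong"; the reversal comes entirely from the methodological choice, which is why comparisons across methodology changes are so fragile.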
The phenomenon of worldwide university rankings has changed the political agenda of many national education ministries and even of international educational institutions; see, for instance, the involvement of UNESCO in the Berlin Principles (Institute for Higher Education Policy, 2006).
Aguillo et al. (2010) have compared some of the main rankings. As these rankings use different criteria, they tend to differ considerably. Only the very top universities (Harvard, Stanford, Yale, Oxford, Cambridge (UK)) remain top universities in every ranking. However, one does not need dedicated rankings to know that these institutions are top universities. For most other universities, the rankings raise a serious reproducibility question.
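Such (lack of) agreement can be quantified. Aguillo et al. employ dedicated similarity measures for comparing overlapping ranked lists; as a simpler, purely illustrative stand-in, one may compute a rank correlation such as Kendall's tau between the positions of the same universities in two rankings. The positions below are invented.

    from scipy.stats import kendalltau

    # Hypothetical positions of six universities in two different rankings.
    ranking_1 = [1, 2, 3, 4, 5, 6]
    ranking_2 = [1, 2, 5, 6, 3, 4]   # same top pair, shuffled middle

    tau, _ = kendalltau(ranking_1, ranking_2)
    print(f"Kendall tau = {tau:.2f}")   # about 0.47: far from full agreement

A tau of 1 would mean the two rankings agree completely; values well below 1, as here, signal exactly the reproducibility problem mentioned above.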


8.4.9 Conclusion on University Rankings
For decades informetricians have studied journal rankings. Nowadays another type of ranking has come to the fore, namely (world) university rankings. Although such rankings may be condemned as a kind of race based on narrowly defined parameters, making the "big" even "bigger," this does not necessarily have to be the case. Recall that Van Parijs (2009) formulated a possible purpose for such rankings: in his view, university rankings must be redesigned so that they give institutions and policy makers incentives to honor the highest intellectual and social values. Unfortunately, nowadays incentives are directed more at publishing often than at publishing well. Numbers of publications on their own should never be decisive in tenure decisions or in the evaluation of grant applications.
Recall that most of these rankings neglect educational parameters. One of the few attempts to include educational parameters in university performance measurement (within a single university) is due to Degraeve et al. (1996). These authors used Data Envelopment Analysis (DEA), a technique that enables the incorporation of inputs and outputs of different natures.
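In its basic (CCR) formulation, DEA computes for each unit an efficiency score by finding the input and output weights most favorable to that unit, subject to the constraint that no unit scores above 1 under those weights. The sketch below solves this linear program with scipy; the departments, inputs and outputs are hypothetical, and the model is the textbook CCR multiplier form rather than the specific model of Degraeve et al.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: 4 departments, 2 inputs (staff, budget in millions)
    # and 2 outputs (graduates, publications). Rows are units.
    X = np.array([[20, 1.0], [30, 1.5], [25, 1.2], [40, 2.0]])  # inputs
    Y = np.array([[60, 35], [80, 40], [90, 50], [70, 30]])      # outputs

    def ccr_efficiency(o, X, Y):
        """Input-oriented CCR efficiency of unit o (multiplier form)."""
        n, m = X.shape              # n units, m inputs
        s = Y.shape[1]              # s outputs
        # Variables: output weights u (length s), then input weights v (length m).
        c = np.concatenate([-Y[o], np.zeros(m)])       # maximize u . y_o
        A_eq = [np.concatenate([np.zeros(s), X[o]])]   # normalize: v . x_o = 1
        b_eq = [1.0]
        A_ub = np.hstack([Y, -X])                      # u . y_j - v . x_j <= 0
        b_ub = np.zeros(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (s + m))
        return -res.fun                                # efficiency in (0, 1]

    for o in range(len(X)):
        print(f"unit {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")

Because each unit is evaluated under its own most favorable weights, DEA sidesteps the arbitrary fixed weights that plague composite ranking scores, which is precisely what makes it attractive for combining teaching and research dimensions.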