Page 280 - Becoming Metric Wise



                This indicator shows whether the research group is conservative
             in its publication attitude, in the sense of publishing in journals
             with a rather low impact (JCSm/FCSm < 1), or more daring
             (JCSm/FCSm > 1) in its submission policy.

          8.5.2 The Ratio of Averages Versus the Average of Ratios
          Problem

          A few years ago colleagues such as Lundberg (2007) and Opthof &
          Leydesdorff (2010) criticized the (meanwhile corrected) “Crown indica-
          tor” of CWTS, stating that ratios of averages only make sense when
          applied to normally distributed data. A better approach would be to
          consider the average of the ratios of observed values (say, the number
          of received citations) to expected values (such as the average number
          of citations in the field). However, this approach does not completely
          solve the problem, as one must still define ‘the field’ and the proper
          citation window.
             Ratios of averages and averages of ratios were introduced, in the
          context of impact factors, by Egghe and Rousseau (1996a,b); see
          Subsection 6.7.3. Clearly, the average impact factor is an average of
          ratios (AoR), while the global impact factor is a ratio of averages
          (RoA). In the context of research evaluation an AoR is the better
          approach. Yet, we claim that the impact of a field can best be
          described as an RoA. We recall that when geometric means are used
          instead of arithmetic ones, the AoR versus RoA problem disappears.
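          The distinction, and the fact that geometric means make it vanish, can
          be illustrated with a small sketch. The citation counts and expected
          values below are invented for illustration only; they are not data
          from the text.

```python
import math

# Hypothetical per-paper citation counts c_i and field expected values e_i
# (illustrative numbers only; 0.5 is used instead of 0 so logs are defined).
citations = [10, 3, 25, 0.5, 7]
expected  = [8, 4, 12, 2, 6]

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log; requires strictly positive inputs
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Ratio of averages (RoA): one global ratio of two arithmetic means.
roa = arithmetic_mean(citations) / arithmetic_mean(expected)

# Average of ratios (AoR): normalize each paper first, then average.
aor = arithmetic_mean([c / e for c, e in zip(citations, expected)])

# With geometric means the two constructions coincide, because the
# geometric mean of ratios equals the ratio of the geometric means.
g_roa = geometric_mean(citations) / geometric_mean(expected)
g_aor = geometric_mean([c / e for c, e in zip(citations, expected)])

print(f"RoA (arithmetic): {roa:.4f}")   # differs from the AoR below
print(f"AoR (arithmetic): {aor:.4f}")
print(f"RoA (geometric):  {g_roa:.4f}")  # equals the geometric AoR
print(f"AoR (geometric):  {g_aor:.4f}")
```

          On these invented numbers the arithmetic RoA and AoR give visibly
          different values, while the two geometric variants agree exactly,
          which is the point recalled in the text.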


          8.6 TOP X% PUBLICATIONS

          According to CWTS the proportion of top 10% cited publications (also
          known as the T indicator) is the best, i.e., most robust and size-
          independent, indicator for the quality of an institute’s publications. By the
          term “top 10% publications” CWTS means the top 10% most frequently
          cited among similar publications, i.e., published in the same field, the
          same publication year and being of article or review type (in the WoS).
          Dividing the percentage by 10 normalizes the indicator so that 1.0 is
          the expected global norm, and one can compare an institute’s (or
          research group’s) performance with this global norm. Concretely, if
          12% of an institute’s publications belong to the top 10% of its field,
          then this institute is performing rather well, while if 6% of an
          institute’s publications belong to the top 10% of their field, then
          this institute is performing rather poorly. Similarly, one may
          consider top 5% or top 1% publications.
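          The computation can be sketched as follows. The citation counts and
          the field’s top-10% threshold are hypothetical; in practice the
          threshold is determined per field, publication year, and document
          type, as described above.

```python
# Proportion of top 10% publications, normalized so 1.0 is the global norm.
def top10_indicator(citation_counts, field_threshold):
    """Fraction of publications at or above the field's top-10% citation
    threshold, divided by 0.10 so that 1.0 matches the global expectation."""
    top = sum(1 for c in citation_counts if c >= field_threshold)
    proportion = top / len(citation_counts)
    return proportion / 0.10

# Hypothetical institute: 50 publications, 6 of which reach the (assumed)
# field threshold of 30 citations, i.e., 12% in the top 10% of the field.
counts = [31, 40, 55, 33, 60, 45] + [5] * 44
print(top10_indicator(counts, 30))  # about 1.2, above the global norm of 1.0
```

          This mirrors the example in the text: 12% in the top 10% gives an
          indicator of 1.2 (rather good), whereas 6% would give 0.6 (rather
          poor).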