   Metrics should be transparent: the construction of the data should follow a clearly stated set of rules, and everyone should have access to the data.
5. Allow those evaluated to verify data and analysis.
   Data should be verified by those evaluated, who should be offered the opportunity to provide corrections and to contribute explanatory notes if they wish. It is easy to underestimate the difficulty of constructing accurate data. Evaluators must spend time and money to produce data of high quality, and those mandating the use of metrics should guarantee that the data are accurate and budget for this accordingly.
6. Account for variation by field in publication and citation practices.
   Sensitivity to field differences is important. Values of metrics differ by field, and hence their interpretation must be adapted to the corresponding field or even subfield (Smolinsky & Lercher, 2012). Old and new fields may differ in growth rate, degree of interdisciplinarity, and the resources they need as inputs. This may affect the performance of scientists and the way in which they are best assessed. One way to take this into account is by normalizing data for variation in citation and publication rates by field and over time (a minimal sketch follows at the end of this list). Humanists will not be able to use citation counts; computer scientists will need to ensure that conference papers are included. The state of the art is to select a suite of possible indicators and allow fields to choose among them. Similarly, a disaggregated approach to research evaluation is always preferable to an aggregated one: research evaluation instruments should discard as little information as possible, by not aggregating indicators or data. Even then, interdisciplinary research poses a further challenge.
7. Base assessment of individual researchers on a qualitative judgement of their portfolio.
   In other words, standard metrics should not be applied to individuals. We note, however, that the h-index was invented precisely for use on individuals (a sketch of its computation also follows at the end of this list); following the Leiden Manifesto to the letter would exclude such use.
8. Avoid misplaced concreteness and false precision.
   Reporting journal impact factors to three decimal places is a typical case of false precision.
9. Recognize the systemic effects of assessment and indicators.
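To make point 6 concrete, here is a minimal sketch of field normalization in Python. The paper records and the field baselines are purely hypothetical; in practice a baseline (the mean number of citations per paper in a field, for a given publication year) would be derived from a bibliographic database.

    from statistics import mean

    # Hypothetical data: (field, citation count) for each paper of one researcher.
    papers = [
        ("mathematics", 4),
        ("mathematics", 1),
        ("cell biology", 40),
        ("cell biology", 12),
    ]

    # Hypothetical world baselines: mean citations per paper in each field.
    field_baseline = {"mathematics": 2.5, "cell biology": 25.0}

    def normalized_scores(papers, baseline):
        """Divide each paper's citations by its field's average: a score of
        1.0 means 'cited as often as an average paper in the same field'."""
        return [cites / baseline[field] for field, cites in papers]

    scores = normalized_scores(papers, field_baseline)
    print(scores)        # [1.6, 0.4, 1.6, 0.48]
    print(mean(scores))  # mean normalized score across fields: 1.02

The normalized scores make the mathematician's and the cell biologist's papers comparable, which their raw citation counts are not.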
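Point 7 mentions the h-index. A short sketch of its computation, run on a hypothetical citation record, shows how directly the indicator is tied to an individual's publication list:

    def h_index(citations):
        """The largest h such that at least h papers have at least
        h citations each (Hirsch, 2005)."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for one researcher's papers.
    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
    print(h_index([25, 8, 5, 3, 3]))  # 3: the highly cited paper adds nothing beyond rank 1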