
8.13.7 Alberts’s Warning Against “me-too science” (Alberts, 2013)

              Bruce Alberts, former president of the National Academy of Sciences of
              the USA and former Editor-in-Chief of the journal Science, wrote in an
              editorial supporting the DORA declaration:
   ... perhaps the most destructive result of any automated scoring of a researcher’s quality is that it encourages “me-too science”. Any evaluation system in
                 which the mere number of a researcher’s publications increases his or her score
                 creates a strong disincentive to pursue risky and potentially groundbreaking
                 work, because it takes years to create a new approach in a new experimental
                 context, during which no publications should be expected. Such metrics further
                 block innovation because they encourage scientists to work in areas of science
                 that are already highly populated, as it is only in these fields that large numbers
                 of scientists can be expected to reference one’s work, ... only the very bravest
                 of young scientists can be expected to venture into such a poorly populated
                 research area, unless automated numerical evaluations of individuals are
                 eliminated.
   The leaders of the scientific enterprise must accept full responsibility for thoughtfully analyzing the scientific contributions of other researchers. To do so in a meaningful way requires the actual reading of a small selected set of each researcher’s publications, a task that must not be passed by default to journal editors.
   We fully agree with these thoughtful observations and note that they do not conflict with the contents of the Leiden Manifesto (Hicks et al., 2015).



              8.14 CONCLUSION
Although research evaluation should be performed by peers, bibliometric expertise is needed and counting is a necessity. Because of differences in research aims, such evaluations are not context-free; reflection is required, and one should realize that the research environment changes because it is measured.
   Perutz (2002), with Cambridge University in mind, wrote that creativity in science, as in the arts, cannot really be organized. It arises spontaneously from individual talent. Well-run laboratories can foster it, but bureaucrats organizing research evaluations based on self-evaluation reports, lots of numbers (purely number-crunching scientometrics) and expensive site visits by so-called experts can kill it. All too often scientists complain about hierarchical organizations, inflexible, bureaucratic