Karlsruhe, Germany) under the guidance of Hariolf Grupp, initiated more
studies and evaluations of science and technology performances.
Since those days two tendencies related to research evaluation have
developed. One is the professionalization of the field leading to expert
organizations such as CWTS, Science-Metrix (Montréal, Canada) and
many other local ones, such as ECOOM in Flanders. The other is the
fact that Thomson Reuters and Elsevier (Scopus) provided web-based
software tools (InCites and SciVal, respectively) so that nonexperts can
generate institutional metrics (and other indicators). The use of such ready-made indicators
without thorough reflection on their meaning is sometimes referred to as
amateur bibliometrics. Basically, the data presented in the Web of Science
or Scopus must be seen as raw data. From these data professional experts
build their own databases, which are cleaned (most errors are removed),
in which name disambiguation has been performed, and in which searches
can be performed that are not possible in the WoS or Scopus.
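By way of illustration, the following minimal sketch (in Python, with hypothetical record fields; real WoS or Scopus exports are more complex) shows the kind of name normalization and crude disambiguation step that underlies such cleaned databases:

    # Minimal sketch of cleaning raw bibliographic records. Field names
    # ("author", "affiliation", "title") are hypothetical, and names are
    # assumed to be in "Surname, Given" form.
    import unicodedata
    from collections import defaultdict

    def normalize_name(raw: str) -> str:
        """Strip accents and reduce an author name to 'Surname, Initials'."""
        ascii_name = (unicodedata.normalize("NFKD", raw)
                      .encode("ascii", "ignore").decode())
        surname, _, given = ascii_name.partition(",")
        initials = "".join(w[0].upper() for w in given.split() if w)
        return f"{surname.strip().title()}, {initials}"

    def disambiguate(records):
        """Group records by normalized name plus affiliation, a crude
        stand-in for the richer heuristics professional centers use."""
        authors = defaultdict(list)
        for rec in records:
            key = (normalize_name(rec["author"]), rec.get("affiliation", ""))
            authors[key].append(rec["title"])
        return authors

    raw_records = [
        {"author": "Grupp, Hariolf", "affiliation": "ISI Karlsruhe",
         "title": "Paper A"},
        {"author": "GRUPP, H.", "affiliation": "ISI Karlsruhe",
         "title": "Paper B"},
    ]
    for (name, affiliation), titles in disambiguate(raw_records).items():
        print(name, affiliation, titles)  # both records merge under "Grupp, H"

Professional groups of course rely on far richer signals (coauthors, addresses, subject categories, citation links); the affiliation key above only gestures at the idea.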
8.12.2 Focusing on a Small Set of Top Journals
In some fields there is a tendency to evaluate scholars exclusively by the
number of publications in a small set of top or premier journals. In our
opinion this is a practice that should be banned. Indeed, not all influential
articles are published in this limited set of journals and not all articles
published in a top journal are top articles. Moreover, this practice discourages
publishing in journals outside the field. Using informetric indicators shifts
the focus from the venue of publication to the use of publications. Note
that the Nature Index uses a similarly problematic system.
It is essential that, if tenure or evaluation committees want to consider
all publications, they have at their disposal a complete list of the candidate's
publications, preferably provided by the candidate and checked by the
committee (the candidate might omit articles that were later shown to
contain errors).
8.12.3 Lists and Some Reflections on the Compilation and Use of Lists
Continuing the previous section, we note that, when evaluating scientific
performance in disciplines or languages that are not adequately represented
in the major international databases, one tends to draw up lists of
“best” journals. The field of management is an example of a field that
often uses such lists (Xu et al., 2015). Using such lists is not necessarily a