Page 262 - Becoming Metric Wise
The Quality of the Database(s) Used
Only a few large, international bibliographic databases exist. Clarivate Analytics’ Web
of Science and Elsevier’s Scopus both belong to commercial enterprises
and are not freely available; only large research institutes or consortia can
afford to buy them, or to buy access to them. Moreover, these databases are
restricted in coverage, e.g., with respect to language and subdomains. Freely available alternatives
such as Google Scholar (GS) fall short in quality control or coverage.
Furthermore, databases are not error-free: it is well known that they suffer from
inconsistent and erroneous spellings of author names, non-standardized
journal names, and incorrect publication years, volumes, and pagination.
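Many such name variants can be collapsed with simple string normalization before records are compared. The following is a minimal sketch; the `normalize_author` helper and the sample variants are hypothetical illustrations, not part of any actual database's cleaning pipeline:

```python
import re

def normalize_author(name: str) -> str:
    """Collapse trivial spelling variants of an author name:
    letter case, punctuation, and extra whitespace."""
    name = name.lower()
    name = re.sub(r"[.,]", " ", name)         # drop commas and periods
    return re.sub(r"\s+", " ", name).strip()  # collapse runs of whitespace

# Three spellings of the same (hypothetical) author record
variants = ["Rousseau, R.", "rousseau r", "ROUSSEAU,R."]
print({normalize_author(v) for v in variants})  # → {'rousseau r'}
```

Real cleaning pipelines go much further (initials versus full first names, transliteration, affiliation matching), but even this crude step shows why uncleaned data inflate apparent author counts.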
According to Maydanchik (2007), a database (any database, not just a bibliographic
one) must have four basic qualities: accuracy, completeness, consistency,
and currency, with accuracy being the key quality. Here the term accuracy
refers to the degree to which the data correctly reflect the real-world
object (in our case, a scientific publication) or event being described. A
bibliographic database cannot possibly be complete in the sense of containing
a record of every document ever published, but it should be
complete, and as current as possible, with respect to the set of publications
it aims to cover. Consistency refers to the requirement that each
record of the same type of publication should have the same fields
and, of course, that no two different records should refer to the same
original.
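Two of these qualities, completeness of individual records and absence of duplicate records, can be checked mechanically. Below is a minimal sketch assuming a hypothetical record schema with author, title, journal, and year fields; the field names and sample records are invented for illustration:

```python
# Hypothetical bibliographic records; the required-field schema is an assumption.
REQUIRED_FIELDS = {"author", "title", "journal", "year"}

records = [
    {"author": "Doe, J.", "title": "A", "journal": "X", "year": 2001},
    {"author": "Doe, J.", "title": "A", "journal": "X", "year": 2001},  # duplicate
    {"author": "Roe, R.", "title": "B", "journal": "Y"},                # missing year
]

def incomplete(recs):
    """Completeness check: indices of records missing a required field."""
    return [i for i, r in enumerate(recs) if REQUIRED_FIELDS - r.keys()]

def duplicates(recs):
    """Duplicate check: indices of records repeating an earlier record's key."""
    seen, dups = set(), []
    for i, r in enumerate(recs):
        key = (r.get("author"), r.get("title"), r.get("year"))
        if key in seen:
            dups.append(i)
        seen.add(key)
    return dups

print(incomplete(records), duplicates(records))  # → [2] [1]
```

Accuracy and currency, by contrast, require comparison with the real-world publications themselves and cannot be verified from the stored records alone.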
Emerging Fields
Leydesdorff (2008) warns against the thoughtless use of citation indicators in
the case of emerging fields. Because of their heterogeneity, it is difficult to
delineate science domains and hence to compare research outputs.
This is, in particular, the case for researchers and research groups
active in emerging fields. Their results are sometimes not accepted in traditional
discipline-oriented journals, and even when new, domain-specific
journals exist, these are often not (yet) covered by the big databases. One
may say that citations do not capture the aspect of opening up new perspectives,
as by their nature new fields are peripheral and sparsely cited.
Even Nobel Prize-winning work, such as Tu’s, may receive fewer citations
than the publications that cite it (Hu & Rousseau, 2016, 2017).
Under such circumstances, it is near impossible to determine whether research
groups reach an international norm or benchmark.