Page 283 - Becoming Metric Wise

                                                            Research Evaluation

              lairds, those that publish mostly as first authors as workers, and those that
              contribute equally as first and as last author as traders. Of course, such a
              classification is only valid when authors are ranked in order of actual con-
              tribution, except for the last author who is the leader of the group, maybe
              only contributing by providing funds, organizing the work and taking the
               main responsibility for the published result. It goes without saying that
               such a classification is very rough and most certainly does not apply in
               fields where authors are usually listed in alphabetical order.
                 Returning to the purpose of this chapter we refer to Smolinsky and
               Lercher (2012), who performed an evaluation study in the field of mathe-
               matics. As ground truth they took prizes and grants received and compared
              the results with citation counts. They concluded that citation counts are
              misleading because of the existing variation by subdisciplines (even within
               mathematics). Hence, although citation counts are an attempt to satisfy
               the needs of nonexperts, the authors conclude that this attempt is not
               successful.
              Institutional administrators are in an external position to any discipline
              and hence need to rely on the work of experts.
                 When discussing individuals we recall the fact that not every scientist
              writes articles. Some are famous for their communication skills (Maxmen,
              2016), while others provide essential tools for their colleagues. Indeed,
              outcomes of research often include software programs written by
              scientists. These programs are of the utmost importance in fields such as
              evolutionary biology and the life sciences in general. Recognizing and
              citing software can nowadays be done via a digital object identifier
              assigned to code. Recently, platforms such as Depsy (http://depsy.org)
              have been built to track the impact of software built for academic use
              (Chawla, 2016).
                  Finally we recall that there even exists an h-index for an individual arti-
               cle A (Subsection 7.6.4), defined as the largest natural number h such that
               at least h of the articles citing A have each received at least h citations.
               This concept uses citations of citations, a form of second-generation
               citations. Continuing on the topic
              of single article indicators we note that PLOS (Public Library of Science)
              provides article-level metrics, in short, ALMs (Fenner & Lin, 2014). These
              are a form of altmetrics and include download data, citation data, social
              network data, blog and media data. Fenner and Lin (who work for PLOS)
              claim that collecting these ALMs has potential for research assessment,
              research navigation and retrieval, and research monitoring and tracking,
              leading to a thorough study of the research process.
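The article-level h-index mentioned above can be computed directly from the citation counts of the articles citing A. A minimal sketch (the function and variable names are ours, not from the literature):

```python
def article_h_index(citing_counts):
    """h-index of a single article A: the largest natural number h such
    that at least h of the articles citing A have each received at least
    h citations (a second-generation citation measure)."""
    # Sort the citing articles' own citation counts in decreasing order.
    counts = sorted(citing_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: A is cited by five articles that themselves received
# 10, 8, 5, 4 and 3 citations, respectively.
print(article_h_index([10, 8, 5, 4, 3]))  # 4
print(article_h_index([]))                # 0: A has no citing articles
```

With the example counts, four citing articles have at least 4 citations each but no five have at least 5, so the article h-index is 4.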