content were written based on the WCAG. The Web Accessibility Initiative also has guidelines related to authoring tool accessibility, user agent accessibility, and rich Internet application accessibility. These guidelines, while commonly used, can be overwhelming in scope, so the Web Accessibility Initiative also offers shorter versions of the guidelines documents (such as checkpoints and quick tips), which can be treated as heuristics. Other commonly used guidelines include the operating system interface guidelines from Apple and Microsoft, the research-based web design and usability guidelines from the US government, and the KDE and GNOME interface guidelines. In addition, firms such as the Nielsen Norman Group offer large numbers of specialized guideline sets for a fee.
   Other types of expert review, such as the formal usability inspection and the pluralistic walkthrough, are not as common (Hollingsed and Novick, 2007). If you are interested in the different types of expert review, you should read the classic book on expert reviews (Nielsen and Mack, 1994) or recent HCI papers about expert review methods. However, since expert reviews do not actually involve users, we won't go into any more detail on this topic.


                         10.4.2   AUTOMATED USABILITY TESTING
An automated usability test is a software application that inspects a series of interfaces to assess their level of usability. Often, this works by having the software compare the interfaces against a set of interface guidelines (described in Section 10.4.1). The automated usability testing application then provides a summary report. Automated usability testing applications are often used when a large number of interfaces need to be examined and little time is available for human-based reviews. Their major strength is that they can read through code very quickly, looking for usability problems that are detectable in the code itself; they also typically offer advice about how the code should be fixed, or even fix the code automatically. However, their major weakness is that many aspects of usability, such as appropriate wording, labels, and layout, cannot be discovered by automated means. In addition, most automated tools are designed only to test web interfaces. For instance, an application can determine whether a web page has alternative text for a graphic (important for accessibility, and a requirement under WCAG 2.0) by checking for the presence of an alt attribute in the <img> tag. However, an application cannot determine whether that alternative text is clear and useful (e.g., "picture here" would not be appropriate alternative text, but it would satisfy the requirements of the automated usability testing application). In many such situations, manual checks are required. A manual check occurs when one of these applications notes that, because certain interface features are present, a human inspection is required to determine whether a guideline has been met (e.g., whether a form has proper labels).
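   To make the distinction concrete, here is a minimal sketch (in Python, using only the standard library's html.parser) of the kind of rule such a tool might apply: a missing alt attribute is reported as a definite violation, while generic-looking alt text is flagged for a manual check. The list of "generic" words is a hypothetical heuristic chosen for illustration, not something specified by WCAG.

from html.parser import HTMLParser

# Hypothetical heuristic: alt text that is probably uninformative.
GENERIC_ALT = {"image", "picture", "photo", "graphic", "picture here"}

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.errors = []          # definite, machine-detectable violations
        self.manual_checks = []   # items that need human inspection

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "(unknown source)")
        alt = attrs.get("alt")
        if alt is None:
            # No alt attribute at all: a clear guideline failure.
            self.errors.append(f"<img src={src!r}> has no alt attribute")
        elif alt.strip().lower() in GENERIC_ALT:
            # Alt text is present, but only a human can judge whether
            # it is actually clear and useful.
            self.manual_checks.append(
                f"<img src={src!r}> has generic alt text {alt!r}")

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="picture here">')
print("Errors:", checker.errors)
print("Manual checks:", checker.manual_checks)

   A real automated testing application would apply many such rules across a whole site and consolidate the results into the summary report described above.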
   Automated usability testing applications are good at measuring certain statistics, such as the number of fonts used, the average font size, the average size of clickable buttons, the deepest level of menus, and the average loading time of graphics (Au et al., 2008). These are useful metrics, but they do not ascertain how users