perform honestly and accurately, users who are paid for their answers (as is often the case) might be in it for the money. The nature of the small tasks might encourage users to emphasize speed over accuracy, rushing through tasks to collect as much payment as possible, without any regard for the quality of the answers provided (Kittur and Kraut, 2008).
14.3.2.1 Software infrastructure
Human computation studies need not have extensive or complex software infrastructure. Studies can easily be run through homegrown or customized web applications, together with logging software capable of tracking the details and time of any given interaction. One productive approach for such tools might be to build a database-driven web application capable of storing appropriate demographic background information associated with each participant, along with details of each action and task completed. You might even add an administrative component capable of managing and enrolling prospective participants. These homegrown applications are generally not terribly difficult to construct, particularly if you have a web-based implementation of the key tasks under consideration, or were planning on building one anyway. For some tasks—particularly those involving collection of fine-grained detail or requiring complex interactions—the freedom associated with constructing your own application may be necessary to get the job done.
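To make the idea concrete, the sketch below outlines one possible homegrown backend of this sort, using Python with Flask and SQLite. The endpoint names, database schema, and demographic fields are hypothetical and would need to be adapted to the study at hand.

# A minimal sketch of a homegrown logging backend (hypothetical schema and
# endpoint names), assuming Flask and the standard-library sqlite3 module.
import sqlite3
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
DB_PATH = "study.db"  # hypothetical database file

def init_db():
    """Create tables for participants and logged task events."""
    with sqlite3.connect(DB_PATH) as db:
        db.execute("""CREATE TABLE IF NOT EXISTS participants (
                          id INTEGER PRIMARY KEY,
                          age INTEGER, gender TEXT, enrolled_at REAL)""")
        db.execute("""CREATE TABLE IF NOT EXISTS task_events (
                          id INTEGER PRIMARY KEY,
                          participant_id INTEGER,
                          task TEXT, action TEXT, detail TEXT,
                          timestamp REAL,
                          FOREIGN KEY (participant_id)
                              REFERENCES participants(id))""")

@app.route("/enroll", methods=["POST"])
def enroll():
    """Store demographic background information for a new participant."""
    data = request.get_json()
    with sqlite3.connect(DB_PATH) as db:
        cur = db.execute(
            "INSERT INTO participants (age, gender, enrolled_at) VALUES (?, ?, ?)",
            (data.get("age"), data.get("gender"), time.time()))
    return jsonify({"participant_id": cur.lastrowid})

@app.route("/log", methods=["POST"])
def log_event():
    """Record one interaction: which task, what action, and when it happened."""
    data = request.get_json()
    with sqlite3.connect(DB_PATH) as db:
        db.execute(
            "INSERT INTO task_events (participant_id, task, action, detail, timestamp) "
            "VALUES (?, ?, ?, ?, ?)",
            (data["participant_id"], data["task"], data["action"],
             data.get("detail", ""), time.time()))
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    init_db()
    app.run(debug=True)

A web-based task implementation would then post a small record to the logging endpoint for each action of interest, leaving analysis of timing and accuracy for later.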
Commercial crowdsourcing services provide an attractive alternative to homegrown
software. These commercial offerings provide platforms for creating tasks, including
providing training materials, presenting task components, and collecting task results,
with tools designed to minimize—if not eliminate—the need to do any programming.
Perhaps even more importantly, they also offer access to registered workers who have
expressed interest in completing small tasks in exchange for micropayments. This
infrastructure significantly simplifies recruitment of participants—once you publish
your tasks, registered users can find them on the site and get to work. These tools can
facilitate enrolling users, managing payments, and even prescreening users to verify
eligibility in terms of demographic requirements (gender, age, etc.) or background
knowledge (Paolacci et al., 2010), thus eliminating many of the headaches of study
design. Although Amazon's Mechanical Turk (http://www.mturk.com) is by far the
crowdsourcing tool most used in published human-computer interaction studies, other
systems such as CrowdFlower also appear in the literature (Kucherbaev et al., 2016).
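Prescreening of this sort is typically expressed as a set of qualification requirements attached to a task. The sketch below shows how such requirements might look when using Mechanical Turk's API through the boto3 Python library; the specific qualification identifiers and thresholds are illustrative and should be checked against the platform's current documentation.

# A sketch of prescreening requirements for a Mechanical Turk task, expressed
# in the form accepted by boto3's create_hit call. The identifiers below are
# Mechanical Turk's built-in qualifications as documented at the time of
# writing; verify them before use.
qualification_requirements = [
    {
        # Built-in worker locale qualification: restrict to US-based workers.
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
    {
        # Built-in approval-rate qualification: workers with at least 95%
        # of prior assignments approved.
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
]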
Although details obviously differ across platforms, construction of studies is generally straightforward. Tasks and instructions can be created via tools provided by the
sites, with custom HTML and JavaScript programming as needed, particularly for
more complex tasks. Some platforms also allow tasks that load the contents of external web sites (McInnis and Leshed, 2016), providing more control to task designers.
Software development APIs often provide additional flexibility, at the cost of some
amount of programming (Amazon, 2016; CrowdFlower, 2016).
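As one concrete illustration, the sketch below uses Mechanical Turk's API, via the boto3 Python library, to publish a task that loads an external web page of the kind described above. It assumes AWS credentials are already configured; the task URL, reward, and counts are placeholder assumptions rather than recommendations, and the prescreening requirements sketched earlier could be attached through the call's QualificationRequirements parameter.

# A sketch of publishing a task ("HIT") through the Mechanical Turk API via
# boto3. The sandbox endpoint lets you test a study without paying workers.
import boto3

SANDBOX_URL = "https://mturk-requester-sandbox.us-east-1.amazonaws.com"
mturk = boto3.client("mturk", region_name="us-east-1", endpoint_url=SANDBOX_URL)

# An ExternalQuestion embeds your own web-based task in a frame, combining
# the control of a homegrown application with the platform's worker pool.
external_question = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/study/task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

response = mturk.create_hit(
    Title="Rate the clarity of a short paragraph",
    Description="Read one paragraph and answer three questions (about 2 minutes).",
    Keywords="survey, rating, text",
    Reward="0.25",                      # payment per assignment, in US dollars
    MaxAssignments=50,                  # number of distinct workers per task
    LifetimeInSeconds=7 * 24 * 3600,    # how long the task stays discoverable
    AssignmentDurationInSeconds=15 * 60,
    Question=external_question,
)
print("HIT id:", response["HIT"]["HITId"])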
A number of research efforts have extended the Mechanical Turk software toolkits to better support crowdsourced studies of web interface usability (Nebeling et al.,