                    custom apps? Are there additional ethical dilemmas associated with the
                    combination of these various data types?

                  3.  Many crowdsourcing user studies might be seen as generalizations of remote
                    usability studies, conducted through a software platform built to support
                    participant recruitment. However, incentives might differ: in traditional lab
                    studies, participants might be offered some money or a gift for participating, but
                    crowdsourcing workers are generally paid by the task. Does this approach raise
                    any concerns regarding the ethical treatment of research participants?



                  RESEARCH DESIGN EXERCISE

                  The combination of human computation and ubiquitous computing presents some
                  interesting and challenging opportunities for HCI research. Imagine a novel
                  application of the intersection of these techniques designed to help with a
                  distinctly ancient and noncomputerized human activity: gardening. Specifically,
                  a gardening support network might use online fora (or is it flora?) for members
                  to exchange information and tips about cultivation of various plants in different
                  climes. Participants might use ubiquitous computing tools to capture photos of
                  plants, to measure activity in watering, and to track time spent working on the
                  garden. Finally, human computation elements might be used to identify
                  unfamiliar plants, as well as blights or other infections that might harm them: images
                  collected from an individual's garden might be sent to a community of workers
                  who might theorize about the identity of the plant in question, with a majority
                  vote summarizing the consensus of the community. Speculate as to how you might
                  go about constructing and studying this complex ecosystem. What design issues
                  and challenges do you see? How might issues such as differing levels of expertise
                  and experience be accounted for in the design? How might users distinguish
                  between good advice and bad? How can users be enticed to participate in the
                  interpretation of provided images? How might you evaluate the success of the
                  various components of this system?
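
                  As a concrete starting point for the human computation component, the
                  following sketch (in Python) shows one way a simple majority vote might
                  aggregate worker-supplied identifications for a single submitted image.
                  The function name, the agreement threshold, and the sample responses are
                  illustrative assumptions, not part of any particular crowdsourcing
                  platform.

                    from collections import Counter

                    def majority_vote(labels, min_responses=3):
                        # Aggregate worker identifications for one image by simple majority.
                        # labels: one proposed identification (string) per worker.
                        # min_responses: illustrative threshold before reporting a consensus.
                        # Returns (consensus_label, agreement_fraction), or (None, 0.0)
                        # if there are not yet enough responses.
                        if len(labels) < min_responses:
                            return None, 0.0
                        counts = Counter(labels)
                        label, votes = counts.most_common(1)[0]
                        return label, votes / len(labels)

                    # Hypothetical responses from community members for one garden photo.
                    responses = ["tomato blight", "tomato blight", "powdery mildew", "tomato blight"]
                    consensus, agreement = majority_vote(responses)
                    print(consensus, agreement)  # -> tomato blight 0.75

                  A fuller design might also weight responses by worker expertise or past
                  accuracy, which connects directly to the questions about differing levels
                  of expertise and distinguishing good advice from bad raised above.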


