NOTES

           16.  Institute for Statistics Education, “Glossary of Statistical Terms: Test-Retest Reliability,” available at http://www.statistics.com.
           17.  See, for example, Oscar H. Gandy Jr., “Public Opinion Surveys and the
              Formation of Privacy Policy,”  Journal of Social Issues 59 (2003): 283–99
              (the difficulty of framing neutral questions is “especially problematic in
              the realm of privacy policy”); Susan Freiwald, “A First Principles Approach
              to Communications’ Privacy,”  Stanford Technology Law Review  (2007): 3
              (questions on privacy might be “too complicated and too easily skewed” to
              give accurate results).
           18.  Ruut Veenhoven, “Why Social Policy Needs Subjective Indicators,” Econstor,
              available at http://www.econstor.eu/handle/10419/50182.
           19.  Pew Research Center for the People and the Press, “Methodology: Question Wording,” available at http://www.people-press.org/; examples of such bias include “social desirability bias” (“inaccurate answers to questions that deal with sensitive subjects” like drug use or church attendance, especially in face-to-face interviews), “acquiescence bias” (in a poll asking whether military strength was the best way to secure peace, 55 percent were in favor when the question was phrased as “yes or no,” but only 33 percent were in favor when “diplomacy” was offered as an alternative), and question-order effects (a 2008 poll found that an additional 10 percent of respondents expressed dissatisfaction with current affairs if they were previously, rather than subsequently, asked whether they approved of the president’s performance). See also Andrew Binder, “Measuring Risk/Benefit Perceptions of Emerging Technologies,” Public Understanding of Science (2011), accessed at http://pus.sagepub.com/ (short opinion polls may yield different results than longer academic surveys).
           20.  B. J. McNeil et al., “On the Elicitation of Preferences for Alternative Therapies,”
              New England Journal of Medicine 306 (1982): 1259.
           21.  The archetypal example of survey error leading to false results was the famous “Literary Digest Poll,” which falsely predicted Roosevelt’s loss of the 1936 presidential election by relying solely on telephone and car owners, a disproportionately Republican group. See also “The War Over Love Heats Up Again,” Los Angeles Times, October 29, 1987 (a 1987 mail-in survey on love found that 98 percent of women were unhappy in their relationships, while a telephone poll found that 93 percent were happy, possibly because the unhappy had more motivation to mail in their responses); Russell D. Renka, “The Good, the Bad, and the Ugly of Public Opinion Polls,” Southeast Missouri State University (Internet polls, which tend to attract an unrepresentative sample of the population and lack safeguards against multiple voting, can be particularly susceptible to this type of error).
           22.  See, for example, Floyd J. Fowler Jr., Survey Research Methods, 4th ed. (Thousand Oaks, CA: SAGE Publications, Inc., 2009), SAGE Research Methods, Web.
           23.  Jason Zengerle, “The. Polls. Have. Stopped. Making. Any. Sense,” New York Magazine (September 30, 2012); Thomas Fitzgerald, “Rethinking Public Opinion,” The New Atlantis 21 (Summer 2008): 45–62.