own attention among four classes of activities: listening, storing, thinking and speaking. A general design principle can be put as follows: An information processing subsystem (a computer or new organization unit) will reduce the net demand on the rest of the organization's attention only if it absorbs more information previously received by others than it produces—that is, if it listens and thinks more than it speaks." (Ibid, p. 42; italics added). Briefly put, IoE has to make us feel quieter rather than noisier.
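
To make the Simon condition concrete, the following minimal sketch (not from the source; the metering unit and the names `absorbed` and `produced` are illustrative assumptions) expresses the quoted design principle as a simple inequality: a subsystem earns its place only if it absorbs more information than it produces.

```python
# A minimal sketch of the Simon condition quoted above. It assumes that a
# subsystem's information flows can be metered in some common unit (say,
# messages per day that humans would otherwise have to read or write); the
# names `absorbed` and `produced` are illustrative, not from the source.

def satisfies_simon_condition(absorbed: float, produced: float) -> bool:
    """True if the subsystem reduces the net demand on the rest of the
    organization's attention, i.e., it listens and thinks more than it speaks."""
    return absorbed > produced

# Hypothetical example: an IoE analytics service that digests 10,000 sensor
# messages a day but pushes only 500 alerts to human operators passes the
# test; one that forwards 12,000 notifications does not.
print(satisfies_simon_condition(absorbed=10_000, produced=500))     # True
print(satisfies_simon_condition(absorbed=10_000, produced=12_000))  # False
```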
So far there is no clear evidence to show that our IoE environment can satisfy the Simon condition, and it is not entirely implausible to say that it tends to speak more than it can effectively listen and think (Chen, Chie, & Tai, 2016; Chen & Venkatachalam, 2017). If so, while the myriad of interconnections and interactions provided by IoE allows us to surf a huge space of opportunities, it also exposes us to a potentially large number of decision problems, each with many alternatives. The latter is widely known as the choice overload problem or the paradox of choice (Iyengar & Lepper, 2000; Schwartz, 2004). The time and attention available for each of these choice problems are, therefore, severely diluted. Under such circumstances, to facilitate decision making, attention-scarce agents may rely more on their fast track of information processing (i.e., the reflexive system) and less on their slow or deliberate track, the reflective system (Kahneman, 2011). In addition to emotion and gut feeling, various fast and frugal heuristics, such as following the herd, choice reinforcement, or using rules of thumb, will play a larger role in decision making (Gigerenzer & Gaissmaier, 2011), which may again make decision makers more like homo sapiens than homo economicus.
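
As an illustration of this shift, the following sketch (a hypothetical construction, not from the source; all options, utilities, and popularity figures are invented) contrasts a deliberative agent that scores every alternative with attention-constrained agents that satisfice over a small sample or simply follow the herd. Under choice overload, the cheap rules become the more feasible ones, even though they need not find the best option.

```python
# Illustrative sketch: reflective versus reflexive choice under choice overload.
import random

def deliberate(options, utility):
    """Reflective system: evaluate every alternative (costly in attention)."""
    return max(options, key=utility)

def satisfice(options, utility, budget=3):
    """Fast-and-frugal rule: inspect only `budget` random options, keep the best."""
    sample = random.sample(options, min(budget, len(options)))
    return max(sample, key=utility)

def follow_the_herd(options, popularity):
    """Another frugal rule: copy whatever most others have chosen."""
    return max(options, key=popularity.get)

random.seed(0)
options = list(range(100))                   # 100 alternatives to choose from
utility = lambda x: -abs(x - 73)             # the best alternative is 73
popularity = {x: random.random() for x in options}

print(deliberate(options, utility))          # 73: optimal, but requires reading all 100
print(satisfice(options, utility))           # best of a 3-option sample: cheap, often far from 73
print(follow_the_herd(options, popularity))  # whatever happens to be popular
```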
Finally, if information overload and choice overload have driven decision makers to behave more like homo sapiens, then even though machine learning can effectively extract and learn the behavioral patterns of these decision makers, the artificial agents so built may be, at best, another homo sapiens, since what the artificial agents learn is what their human counterparts actually did, not what they ought to have done in their own best interest. If one employs these artificial agents as the incarnation of their human counterparts and automates decisions for them, then the well-known GIGO (garbage in, garbage out) principle may apply (Stephens-Davidowitz & Pabon, 2017), and the agents in action are again homo sapiens, not homo economicus.
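
The following sketch (again purely illustrative and not from the source; the payoffs and the choice log are invented) makes the GIGO point concrete: an artificial agent fit to logged human choices reproduces the herd-following bias in the data rather than the payoff-maximizing choice, so automating decisions with it simply puts homo sapiens back in action.

```python
# Illustrative sketch of learning "what they actually did" rather than
# "what they ought to do".
from collections import Counter
import random

random.seed(1)
payoff = {"A": 10, "B": 3}           # alternative A is objectively better

# Hypothetical log in which 80% of humans followed the herd into B.
human_choices = random.choices(["A", "B"], weights=[0.2, 0.8], k=1000)

# "Training" by imitation: the cloned policy predicts the modal observed choice.
cloned_policy = Counter(human_choices).most_common(1)[0][0]

print(cloned_policy)                 # 'B' -- what the humans actually did
print(max(payoff, key=payoff.get))   # 'A' -- what they ought to do
```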
The above three cases, while not exhaustive, explain why Thaler's prediction remains valid, independently of IoE technology.