Page 117 - Artificial Intelligence for the Internet of Everything

Trust and Human-Machine Teaming: A Qualitative Study  103


              mental models), the desire for interdependence, motivation toward team
              versus individual objectives, action toward team objectives, and trust among
              team members. These factors are paramount to team effectiveness, and stud-
              ies in the management domain have confirmed the importance of many of
              these team-performance characteristics (Cohen & Bailey, 1997; De Jong,
              Dirks, & Gillespie, 2016; Kozlowski & Bell, 2003; Salas, Cooke, & Rosen,
              2008). Yet, which of these factors can/should apply toward machine part-
              ners? Wynne and Lyons (2018) define autonomous agent teammate-likeness as
              “the extent to which a human operator perceives and identifies an autono-
              mous, intelligent agent partner as a highly altruistic, benevolent, interdepen-
              dent, emotive, communicative and synchronized agent teammate, rather
              than simply an instrumental tool” (p. 355). The model posited by Wynne
              and Lyons is outlined further below. Notably, teammate-likeness percep-
              tions are believed to arise from the interaction of the dimensions below
              rather than from any single dimension alone; teaming perceptions may
              therefore reflect a combination of factors.


              6.1.3 Perceived Agency
              Robotic systems that have greater decision authority—and greater capability
              to execute that decision authority—should influence teammate perceptions.
              By definition, a teammate is an autonomous entity that can contribute to the
              team’s goals; hence the notion of agency to execute those goals is key. Imagine
              playing soccer with a goalie who is not allowed to touch the ball. The
              goalie would probably not be viewed as a teammate since she/he couldn’t
              actually participate in the game. Machine partners absent agency are mere
              programs that should be interpreted as more tool-like. Effective agents
              should be able to observe the environment, process relevant goal-oriented
              information, and act on the environment (Chen & Barnes, 2014)—hence
              exemplifying agency. A lack of perceived agency should imply a lack of auton-
              omy and increase perceptions that the machine is tool-like rather than
              teammate-like.

              6.1.4 Perceived Benevolence

              As with trust, discussed in the sections above, a core assumption of a teammate
              is that the teammate has one’s best interests in mind. Teammates support one
              another and provide back-up where and when needed. The same should
              hold true of machine partners. As noted above, benevolence is a core ante-
              cedent of trust (Mayer et al., 1995), and it has been discussed as a key driver
              of human-robot trust (Lyons, 2013). Understanding the intent of a