6.1.1 Human-Machine Trust
Technology is not only extending its reach within society, but it is also
increasing in its capacity for autonomous action. The combination of
increased decision initiative and expanded decision authority is a recipe
for potential disaster if or when the technology makes an error. A recent
meta-analysis found that the consequences of automation errors are most
severe when the systems exhibit the highest levels of automation
(Onnasch, Wickens, Li, & Manzey, 2014). Thus researchers have for
decades tried to understand the gamut of factors that shape trust in technologies such as automated systems (see Hoff & Bashir, 2015; Lee & See, 2004).
Trust broadly refers to the belief that a technology will help an individual
accomplish her/his goals in situations of high uncertainty and complexity
(Lee & See, 2004). Trust captures one’s willingness to be vulnerable to
another entity; this vulnerability can be directed toward other people
(Mayer, Davis, & Schoorman, 1995) or toward machines (Lyons & Stokes,
2012). Trust researchers have predominantly examined these beliefs in lab
contexts; however, studies have begun to examine trust of actual fielded
systems in applied settings (see Lyons, Ho, et al., 2016a; Ho et al., 2017).
One of the key aspects of prior trust research is the identification of
factors that influence the trust process, herein referred to as trust
antecedents. The current study considers a number of trust antecedents,
each supported by previous research. The trust antecedents examined in
this chapter include: reliability (Hancock et al., 2011),
predictability (Lee & See, 2004), helping to solve a problem (i.e., supporting
task accomplishment; Hoff & Bashir, 2015), proactively helping a person
(similar to the notion of benevolence in the interpersonal trust literature;
Ho et al., 2017), transparency of decision logic (Lyons, Koltai, et al.,
2016b; Sadler et al., 2016), transparency of intent (Lyons, 2013; Lyons,
Ho, et al., 2016a), transparency of state (Mercado et al., 2016), liking
(Merritt, 2011), familiarity (Hoff & Bashir, 2015), and social interaction
(Waytz, Heafner, & Epley, 2014).
6.1.2 Human-Machine Teaming
In addition to trust perceptions, the current chapter also examines perceptions of human-machine teaming. But what does it mean to be part of a
“team”? Groom and Nass (2007) outline several components of effective
teamwork, which include: shared goals, shared awareness (i.e., shared