Artificial Intelligence for the Internet of Everything
Trust and Human-Machine Teaming: A Qualitative Study
participants and the coding team met to discuss their ratings. The next 30 participants were coded together as a team, and consensus coding was used for the first set of participants (100 in total). For the remaining 505 responses, two raters coded for trust and two raters coded for teaming. The two rater pairs reached 90% agreement or higher on both sets of data. Approximately 5% of the data were not usable for the teaming item because participants said things like "there is no way a machine can be a teammate." The items were coded for the following dimensions.
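The study reports pairwise percent agreement (90% or higher) as its reliability check. As a minimal sketch of how that statistic can be computed, the following uses hypothetical codes for ten responses; the function name and the example data are illustrative, not the study's actual ratings or procedure.

```python
from typing import Sequence

def percent_agreement(rater_a: Sequence[str], rater_b: Sequence[str]) -> float:
    """Fraction of items on which two raters assigned the same code."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must code the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical codes assigned by two raters to ten open-ended responses
rater_a = ["reliability", "predictability", "liking", "reliability", "familiarity",
           "reliability", "predictability", "liking", "reliability", "liking"]
rater_b = ["reliability", "predictability", "liking", "predictability", "familiarity",
           "reliability", "predictability", "liking", "reliability", "liking"]

print(percent_agreement(rater_a, rater_b))  # 0.9, i.e., 90% agreement
```

Percent agreement is simple but does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are a common alternative when category base rates are skewed.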
6.2.4 Trust
The following trust antecedents were coded: reliability, predictability, helps solve a problem, proactively helped, evidenced transparent logic, evidenced transparent intent, evidenced transparent state, liking, familiarity, and social interaction. Examples of each rating category are below. The concepts of reliability and predictability are similar, and both were often rated as present in the open-ended items. In the present study, however, they were distinguished: reliability concerned "doing a task effectively or with high performance," whereas predictability concerned the stability of some behavior over time (for instance, always failing at a certain task, or always responding a certain way regardless of performance, would be predictable but not reliable). The two were often used in conjunction (e.g., "consistently works well"), though not always. The "helps solve a problem" and "proactively helps me" codes were distinguished in that, for the latter, the technology actively supports the user without the user's constant input and monitoring.
Reliability is the main thing. Once I get it set up I need to know it will stay working
(reliability)
Dependability and consistency … (predictability)
This technology allows for an easier experience with less physical stress (helped
solve a problem)
I gain trust in the device(s) as they show that they are becoming able to predict my
preferences and behaviors from limited data points and when they demonstrate
they can self-correct when their predictions are wrong—it makes me see them
more like smart devices, like robots, than devices that simply react at a certain time,
like an alarm clock (proactively helped me)
When it is pulling updated files from publishers nightly … (transparency—logic)