Page 129 - Artificial Intelligence for the Internet of Everything
Trust and Human-Machine Teaming: A Qualitative Study 115
emerged—humanness. However, additional research is needed to fully
unpack the nomological network of human-machine teaming dimensions.
The current study demonstrates that human-machine teaming is a complex and
promising phenomenon for future research and practice.
REFERENCES
Chen, J. Y. C., & Barnes, M. J. (2014). Human-agent teaming for multirobot control:
a review of the human factors issues. IEEE Transactions on Human-Machine Systems, 44
(1), 13–29.
Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: group effectiveness research
from the shop floor to the executive suite. Journal of Management, 23, 239–290.
De Jong, B. A., Dirks, K. T., & Gillespie, N. (2016). Trust and team performance: a meta-
analysis of main effects, moderators, and covariates. Journal of Applied Psychology, 101,
1134–1150.
Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human-robot
teams. Interaction Studies, 8, 483–500.
Hamacher, A., Bianchi-Berthouze, N., Pipe, A. G., & Eder, K. (2016). Believing in BERT:
using expressive communication to enhance trust and counteract operational error in
physical human-robot interaction. In Proceedings of the IEEE international symposium on
robot and human interactive communication (RO-MAN). New York: IEEE.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., &
Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot inter-
action. Human Factors, 53(5), 517–527.
Hanumantharao, S., & Grabowski, M. (2006). Effects of introducing collaborative technol-
ogy on communications in a distributed safety-critical system. International Journal of
Human-Computer Studies, 64, 714–726.
Hinds, P. J., & Mortensen, M. (2005). Understanding conflict in geographically distributed
teams: the moderating effects of shared identity, shared context, and spontaneous
communication. Organization Science, 16, 290–307.
Ho, N. T., Sadler, G. G., Hoffmann, L. C., Lyons, J. B., Fergueson, W. E., & Wilkins, M.
(2017). A longitudinal field study of auto-GCAS acceptance and trust: first year results
and implications. Journal of Cognitive Engineering and Decision Making, 11, 239–251.
Hoff, K. A., & Bashir, M. (2015). Trust in automation: integrating empirical evidence on
factors that influence trust. Human Factors, 57, 407–434.
Kozlowski, S. W. J., & Bell, B. S. (2003). Work groups and teams in organizations. In
W. Borman & D. Ilgen (Eds.), Handbook of psychology: Industrial and organizational psy-
chology (Vol. 12, pp. 333–375). New York, NY: John Wiley & Sons, Inc.
Lasota, P. A., & Shah, J. A. (2015). Analyzing the effects of human-aware motion planning on
close-proximity human-robot collaboration. Human Factors, 57, 21–33.
Lee, J. D., & See, K. A. (2004). Trust in automation: designing for appropriate reliance.
Human Factors, 46, 50–80.
Leite, I., Pereira, A., Mascarenhas, S., Martinho, C., Prada, R., & Paiva, A. (2013). The influ-
ence of empathy in human-robot relations. International Journal of Human-Computer Stud-
ies, 71, 250–260.
Lyons, J. B. (2013). Being transparent about transparency: a model for human-robot inter-
action. In D. Sofge, G. J. Kruijff, & W. F. Lawless (Eds.), Trust and autonomous systems:
Papers from the AAAI spring symposium (Technical Report SS-13-07). Menlo Park, CA:
AAAI Press.