only refining the experimental method and running more human subject trials,
which is very expensive and labor intensive.
In comparison, optimization methods such as the example presented here
not only provide predictions for operator capacity but also directly link that
capacity to a system performance measure, which was cost in our example.
When estimates are instead developed through the fan-out approach, the only
consideration is a vaguely defined threshold for acceptable operator performance.
Furthermore, there is no way to directly infer how this human performance
affects the overall system, which is actually the more critical variable, partic-
ularly in command-and-control settings. Moreover, while examining mission
complexity in terms of low and high workload was very expensive in the
human-subject experimental design, in the cost-based simulation method
mission complexity was represented by the number of targets, which was
relatively inexpensive to alter. Thus, this type of prediction method allows
for more specific and detailed predictions for operator capacity, as well as how
the external environment (i.e., number of targets) will affect overall mission
success.
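To make the comparison concrete, the following minimal sketch, written in Python purely for illustration (the function simulate and its parameters interaction_time, target_window, and mission_length are assumptions of this sketch, not the simulation used in our example), shows how a cost-based simulation can tie the number of targets directly to a mission-level cost: a single operator services targets sequentially, and any target that expires while waiting in the operator's queue adds to the mission cost.

    import random

    # Illustrative sketch only: a single operator services tasks generated by
    # targets appearing over the mission; cost accrues when a target expires
    # before the operator can attend to it.
    def simulate(num_targets, interaction_time=5.0, target_window=20.0,
                 mission_length=300.0, seed=0):
        rng = random.Random(seed)
        # Targets appear at random times during the mission.
        arrivals = sorted(rng.uniform(0, mission_length) for _ in range(num_targets))
        operator_free_at = 0.0
        missed = 0
        for t in arrivals:
            start = max(t, operator_free_at)   # time spent in the operator's queue
            if start - t > target_window:      # target expired before service
                missed += 1
                continue
            operator_free_at = start + interaction_time
        return missed                          # proxy for mission cost

    if __name__ == "__main__":
        for n in (5, 10, 20, 40):
            print(n, "targets ->", simulate(n), "missed")

Sweeping the number of targets in such a model produces a capacity curve at a small fraction of the cost of running additional human-subject trials.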
However, while the simulation estimates provide for multivariate sensi-
tivity analysis across operator and system performance metrics, one drawback
is the inability to directly relate the predictions to possible design inter-
ventions. As previously discussed, the cost-based simulation links the exter-
nal environment to both operator and system performance, but it inherently
lacks the ability to parse out which system parameters could and should be
changed to improve operator and autonomy performance. For example, in the
SA model, all wait times are included in a single measure; however, the wait
times (interaction, queuing, and situation awareness) fundamentally have dif-
ferent causes. In addition, as demonstrated in Figure 7, the different types of
wait times can have dramatically different values, and without the ability to
model and see the separate effects of different wait time sources, it is not clear
which design interventions could mitigate them (such as improved decision
support or increased vehicle autonomy).
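One way to retain this distinction is to treat total wait time as an explicit sum of its components rather than a single aggregate. In the illustrative notation below (not the exact formulation of the SA model), $WTI$, $WTQ$, and $WTSA$ denote interaction, queuing, and situation awareness wait times, respectively:

$$WT_{\text{total}} \;=\; \sum_{i} WTI_i \;+\; \sum_{j} WTQ_j \;+\; \sum_{k} WTSA_k$$

Estimating each term separately makes it possible to attribute delays to their sources and to target design interventions accordingly.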
Moreover, a cost-based simulation cannot represent the impact of specific
automation strategies on operator performance. It is often assumed that as
autonomy levels increase (as depicted in Table 1), the need for human interac-
tion decreases, thus lowering system wait times. However, as can be seen in
Figure 15, these assumptions are not always accurate. In the experiment pre-
viously discussed, we predicted that as system autonomy increased, wait times
due to an operator workload queue (referred to as wait time in the queue in
the previous section) would decrease. However, the dotted line shows the
queuing wait times that were actually observed, and there was clearly an
anomaly in the active condition, which corresponds to LOA 4 in Table 1.
As described in more detail in [16], what was hypothesized to be a decision
support tool that would mitigate operator workload actually degraded operator per-
formance and caused increased, instead of decreased, wait times. This insight
was only gained through the experimentally derived interaction, neglect, and