Operations management, and in particular operations research, is a branch of management science that relies heavily on modeling. Here, a variety of mathematical models is used, ranging from linear programming and project planning to queueing models, Markov chains, and simulation. For example, the location of a warehouse is determined using linear programming, server capacity is added on the basis of queueing models, and an optimal route in a container terminal is determined using integer programming. Models are used to reason about processes (redesign) and to make decisions inside processes (planning and control). The models used in operations management are typically tailored toward a particular analysis technique and only used to answer a specific question. In contrast, process models in BPM typically serve multiple purposes. A process model expressed in BPMN may be used to discuss responsibilities, analyze compliance, predict performance using simulation, and configure a WFM system. However, BPM and operations management have in common that making a good model is “an art rather than a science”. Creating models is therefore a difficult and error-prone task. Typical errors include:
• The model describes an idealized version of reality. When modeling processes, the designer tends to concentrate on the “normal” or “desirable” behavior. For example, the model may cover only 80% of the cases, assuming that these are representative. Typically, this is not the case, as the other 20% may cause 80% of the problems. The reasons for such oversimplifications are manifold. The designer and management may not be aware of the many deviations that take place. Moreover, the perception of people may be biased, depending on their role in the organization. Hand-made models tend to be subjective, and often there is a tendency to make things too simple just for the sake of understandability.
• Inability to adequately capture human behavior. Although simple mathematical models may suffice to model machines or people working on an assembly line, they are inadequate when modeling people involved in multiple processes and exposed to multiple priorities [95, 109]. A worker who is involved in multiple processes needs to divide attention over these processes, which makes it difficult to model any one of them in isolation. Workers also do not work at a constant speed. A well-known illustration of this is the so-called Yerkes–Dodson law, which describes the relation between the workload and the performance of people [95]. In most processes, one can easily observe that people take more time to complete a task and effectively work fewer hours per day if there is hardly any work to do. Nevertheless, most simulation models sample service times from a fixed probability distribution and use fixed time windows for resource availability.
• The model is at the wrong abstraction level. Depending on the input data and the questions that need to be answered, a suitable abstraction level needs to be chosen. The model may be too abstract and thus unable to answer relevant questions. The model may also be too detailed, e.g., the required input cannot be obtained or the model becomes too complex to be fully understood. Consider, for example, a car manufacturer that has a warehouse containing thousands of spare parts. It may be tempting to model all of them in a simulation study to compare different inventory policies. However, if one is not aiming at making statements about a specific spare part, this is not wise. Typically, it is very time-consuming to change