down their passwords in places where they can be found. System administrators make errors in setting up access control or configuration files, and users don't install or use protection software. However, as I discussed in Section 10.5, we have to be very careful when classifying a problem as a user error. Human problems often reflect poor systems design decisions that require, for example, frequent password changes (so that users write down their passwords) or complex configuration mechanisms.
The controls that you might put in place to enhance system security are comparable to those for reliability and safety:
1. Vulnerability avoidance. Controls that are intended to ensure that attacks are unsuccessful. The strategy here is to design the system so that security problems are avoided. For example, sensitive military systems are not connected to public networks so that external access is impossible. You should also think of encryption as a control based on avoidance: even if an attacker gains unauthorized access to encrypted data, they cannot read it without the key. In practice, it is very expensive and time-consuming to crack strong encryption.
2. Attack detection and neutralization. Controls that are intended to detect and repel attacks. These controls involve including functionality in a system that monitors its operation and checks for unusual patterns of activity. If these are detected, then action may be taken, such as shutting down parts of the system or restricting access to certain users. A minimal sketch of such a check appears after this list.
3. Exposure limitation and recovery. Controls that support recovery from problems. These can range from automated backup strategies and information 'mirroring' to insurance policies that cover the costs associated with a successful attack on the system.
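As an illustration of the second kind of control, the sketch below flags a source that generates too many failed logins in a short period, so that access can be restricted. It is written in C; the names, threshold, and window length are illustrative, not taken from any particular system.

    /* Minimal attack-detection sketch: restrict a source after too many
     * failed logins in a short window. Threshold and window are illustrative. */
    #include <stdbool.h>
    #include <time.h>

    #define MAX_FAILURES   5      /* failures tolerated per window   */
    #define WINDOW_SECONDS 60     /* length of the monitoring window */

    struct login_monitor {
        int    failures;          /* failed attempts in the current window */
        time_t window_start;      /* when the current window began         */
    };

    /* Record one failed login; return true if the pattern of activity
     * looks like an attack and access should be restricted. */
    bool record_failure(struct login_monitor *m)
    {
        time_t now = time(NULL);
        if (now - m->window_start > WINDOW_SECONDS) {
            m->failures = 0;                /* start a fresh window */
            m->window_start = now;
        }
        return ++m->failures > MAX_FAILURES;
    }

In practice, one such monitor would be kept per source (for example, per IP address or per account), initialized to zero, and consulted by the authentication code on every failed attempt.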
Without a reasonable level of security, we cannot be confident in a system's availability, reliability, and safety. Methods for certifying availability, reliability, and safety assume that the operational software is the same as the software that was originally installed. If the system has been attacked and the software has been compromised in some way (for example, if the software has been modified to include a worm), then the reliability and safety arguments no longer hold.
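This assumption can be checked directly. The sketch below compares a fingerprint of the operational binary against the value recorded at installation time; it uses a toy FNV-1a checksum to keep the example short, whereas a deployed integrity check would use a cryptographic hash such as SHA-256.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    /* Fingerprint a file with 64-bit FNV-1a (illustrative only; returns 0
     * if the file cannot be opened). */
    static uint64_t fnv1a_file(const char *path)
    {
        uint64_t h = 0xcbf29ce484222325ULL;      /* FNV offset basis */
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        int c;
        while ((c = fgetc(f)) != EOF) {
            h ^= (uint64_t)c;
            h *= 0x100000001b3ULL;               /* FNV prime */
        }
        fclose(f);
        return h;
    }

    /* True if the operational software still matches the installed version. */
    bool software_unmodified(const char *path, uint64_t installed_hash)
    {
        return fnv1a_file(path) == installed_hash;
    }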
Errors in the development of a system can lead to security loopholes. If a system does not respond to unexpected inputs or if array bounds are not checked, then attackers can exploit these weaknesses to gain access to the system. Major security incidents such as the original Internet worm (Spafford, 1989) and the Code Red worm more than 10 years later (Berghel, 2001) took advantage of the same vulnerability. Programs in C do not include array bound checking, so it is possible to overwrite part of memory with code that allows unauthorized access to the system.
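The sketch below shows the weakness in miniature (the function names are illustrative). The first version copies its input into a fixed-size buffer without checking the bound, so any input longer than the buffer overwrites adjacent memory; the second version checks the bound explicitly and always terminates the string.

    #include <stdio.h>
    #include <string.h>

    void vulnerable(const char *input)
    {
        char buf[16];
        strcpy(buf, input);     /* no bounds check: a long input overflows buf */
        printf("%s\n", buf);
    }

    void safer(const char *input)
    {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);  /* copy at most 15 bytes */
        buf[sizeof(buf) - 1] = '\0';           /* always NUL-terminate  */
        printf("%s\n", buf);
    }

Languages such as Java and C# check array bounds at run time and raise an exception on violation, which closes off this particular class of attack.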