segmentation. In the first, some predetermined length such as 1 mile or 1000 ft is chosen as the length of pipeline that will be evaluated as a single entity. A new pipeline segment will be created at these lengths regardless of the pipeline characteristics. Under this approach, then, each pipeline segment will usually have non-uniform characteristics. For example, the pipe wall thickness, soil type, depth of cover, and population density might all change within a segment. Because the segment is to be evaluated as a single entity, the non-uniformity must be eliminated. This is done by using the average or worst-case condition within the segment.
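As a rough sketch of this first approach, the routine below cuts a line into fixed-length segments and assigns each one the worst-case value of a non-uniform attribute. The record layout, the 1-mile length, and the use of minimum wall thickness as the worst case are illustrative assumptions, not details from the text:

```python
# Illustrative sketch only: fixed-length segmentation with worst-case
# aggregation. Attribute records are (beg_station, end_station, value), in feet.

SEGMENT_LEN_FT = 5280  # a predetermined length, e.g., 1 mile

def fixed_segments(total_len_ft, wall_thickness_records):
    """Cut the line every SEGMENT_LEN_FT; give each segment the worst-case
    (here, minimum) wall thickness among the records that overlap it."""
    segments = []
    beg = 0
    while beg < total_len_ft:
        end = min(beg + SEGMENT_LEN_FT, total_len_ft)
        overlapping = [t for r_beg, r_end, t in wall_thickness_records
                       if r_beg < end and r_end > beg]
        segments.append((beg, end, min(overlapping) if overlapping else None))
        beg = end
    return segments

# A 2-mile line whose wall thickness changes mid-route: the first fixed
# segment inherits the thinner wall because the change point falls inside it.
print(fixed_segments(10560, [(0, 4000, 0.375), (4000, 10560, 0.250)]))
```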
An alternative is dynamic segmentation. This is an efficient way of evaluating risk since it divides the pipeline into segments of similar risk characteristics: a new segment is created when any characteristic changes. Since the risk variables measure unique conditions along the pipeline, they can be visualized as bands of overlapping information. Under dynamic segmentation, a new segment is created every time any condition changes, so each pipeline segment has a set of conditions unique from its neighbors. Section length is entirely dependent on how often the conditions change. The smallest segments are only a few feet in length where one or more variables are changing rapidly. The longest segments are several hundred feet or even miles long where variables are fairly constant.
Creating segments

A computer routine can replace a rather tedious manual method of creating segments under a dynamic segmentation strategy. Related issues such as persistence of segments and cumulative risks are also more efficiently handled with software routines. A software program should be assessed for its handling of these aspects. Segmentation issues are fully discussed in Chapter 2.
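A minimal sketch of how such a routine can work is given below; the band layout and function names are assumptions for illustration. It breaks the line at every station where any band of information changes value, so each resulting segment carries a uniform set of conditions:

```python
# Illustrative sketch: each data "band" covers the line with
# (beg_station, end_station, value) records; stations are in feet.

def dynamic_segments(bands):
    """Start a new segment wherever any band changes value."""
    # Every record boundary in every band is a potential break point.
    breaks = sorted({s for band in bands.values()
                     for beg, end, _ in band for s in (beg, end)})
    segments = []
    for beg, end in zip(breaks, breaks[1:]):
        # The conditions in force at the segment's beginning hold throughout,
        # because no band changes value inside the segment.
        conditions = {name: next((v for b, e, v in band if b <= beg < e), None)
                      for name, band in bands.items()}
        segments.append((beg, end, conditions))
    return segments

bands = {
    "depth_of_cover": [(0, 300, "36 in"), (300, 1000, "24 in")],
    "soil_type": [(0, 650, "clay"), (650, 1000, "sand")],
}
# Three segments result, with breaks at stations 300 and 650.
for seg in dynamic_segments(bands):
    print(seg)
```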
VI. Scoring

The algorithms or equations are “rules” by which risk scores will be calculated from input data. Various approaches to algorithm scoring are discussed in earlier chapters, and some algorithm examples are shown in Chapters 3 through 7 and also in Appendix E. The algorithm list is often best created and maintained in a central location where relationships between equations can be easily seen and changes can be tracked. The rules must often be examined and adjusted in consideration of other rules. If weightings are adjusted, all weightings must be viewed together. If algorithm changes are made, the central list can be set up to track the evolution of the algorithms over time. Alternate algorithms can be proposed and shown alongside current versions. The algorithms should be reviewed periodically, both as part of a performance-measuring feedback loop and as an opportunity to tune the risk model for new information availability or changes in how information should be used.
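One simple way to keep such a list is a single versioned structure holding every rule and weighting, so the full set can be reviewed together. In the sketch below, the variable names echo third-party damage variables discussed in earlier chapters, but the weights and structure are invented for illustration:

```python
# Hypothetical central algorithm list: all rules and weightings live in one
# structure, so changes can be tracked and weightings reviewed as a set.

ALGORITHMS = {
    "version": "rev 2",  # bump whenever any rule or weighting changes
    "third_party_index": {   # weights below are illustrative only
        "depth_of_cover": 0.20,
        "activity_level": 0.25,
        "patrol_frequency": 0.15,
        "one_call_system": 0.15,
        "public_education": 0.15,
        "right_of_way_condition": 0.10,
    },
}

def score(index_name, conditions):
    """Weighted sum of 0-10 variable scores for one pipeline segment;
    variables missing from `conditions` contribute 0 here."""
    weights = ALGORITHMS[index_name]
    return sum(w * conditions.get(var, 0) for var, w in weights.items())

print(score("third_party_index", {"depth_of_cover": 8, "activity_level": 4}))
```

Because every weighting sits in one place, adjusting a single weight forces a view of the whole set, and the version tag gives a crude record of the algorithms' evolution over time.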
Assigning defaults

In some cases, no information about a specific event at a specific point will be available. For example, it is not unusual to have no confirmatory evidence regarding depth of cover in many locations of an older pipeline. This can be seen as an information gap. Prior to calculating risk scores, it is necessary to fill as many information gaps as possible. Otherwise, the final scores will also have gaps that will impact decision making.

At every point along the pipeline, each event needs to have a condition assigned. If data are missing, risk calculations cannot be completed unless some value is provided for the missing data. Defaults are the values that are to be assigned in the absence of any other information. There are implications in the choice of default values, and an overall risk assessment default philosophy should be established.

Note that some variables cannot reasonably have a default assigned. An example is pipe diameter, for which any kind of default would be problematic. In these cases, the data will be absent and might lead to a non-scoring segment when risk scores are calculated.

It is useful to capture and maintain all assigned defaults in one list. Defaults might need to be periodically modified. A central repository of default information makes retrieval, comparison, and maintenance of default assignments easier. Note that assignment of defaults might also be governed by rules. Conditional statements (“if X is true, then Y should be used”) are especially useful:

If (land-use type) = “residential high” then (population density) = “high”

Other special equations by which defaults will be assigned may also be desired. These might involve replacing a certain fixed value, converting the data type, special considerations for a date format, or other special assignments.
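The land-use rule above translates directly into a small rule table. The sketch below keeps unconditional and conditional defaults in one central structure; the specific default values shown are invented for illustration:

```python
# Illustrative central default list. Unconditional defaults fill any missing
# value; conditional rules encode "if X is true, then Y should be used".

DEFAULTS = {
    "depth_of_cover": "30 in",      # invented example values
    "patrol_frequency": "monthly",
    # note: no entry for pipe diameter, which cannot reasonably be defaulted
}

CONDITIONAL_DEFAULTS = [
    # (field to test, value to match, field to fill, default to assign)
    ("land_use_type", "residential high", "population_density", "high"),
]

def apply_defaults(record):
    """Fill information gaps on one event record; known data are untouched."""
    for test_field, match, target, value in CONDITIONAL_DEFAULTS:
        if record.get(test_field) == match and target not in record:
            record[target] = value
    for field, value in DEFAULTS.items():
        record.setdefault(field, value)
    return record

print(apply_defaults({"land_use_type": "residential high"}))
```

Keeping both kinds of assignment in one module makes the periodic review and modification of defaults a matter of editing a single list.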
VII. Quality assurance and quality control

Several opportunities arise to apply quality assurance and quality control (QA/QC) at key points in the risk assessment process. Prior to creating segments, the following checks can be made by using queries against the event data set (or in spreadsheets) as the data are collected; a scripted version of several of these checks is sketched after the list:

- Ensure that all IDs are included, to make sure that the entire pipeline is included and that some portion of the system(s) to be evaluated has not been unintentionally omitted.
- Ensure that only correct IDs are used, to find errors and typos in the ID field.
- Ensure that all records are within the appropriate beginning and ending stations for the system ID, to find errors in stationing, sometimes created when converting from field-gathered information.
- Ensure that the sum of all distances (end station - beg station) for each event does not exceed the total length of that ID; the sum might be less than the total length if some conditions are to be later added as default values.
- Ensure that the end station of each record is exactly equal to the beginning station of the next record. This check can also be done during segmentation, since data gaps become apparent in that step. However, corrections will generally need to be made to the events tables, so the check might be appropriate here as well.
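A minimal scripted version of several of these checks is shown below, assuming an in-memory event table of (system ID, beginning station, ending station) records; the layout and names are illustrative:

```python
# Illustrative pre-segmentation QA/QC checks against one event table.
# Each record: (system_id, beg_station, end_station); stations in feet.

def check_events(records, system_lengths):
    """Run the ID, stationing, length, and continuity checks listed above."""
    problems = []
    ids_seen = {rec[0] for rec in records}
    # All IDs included / only correct IDs used.
    problems += [f"missing ID: {i}" for i in system_lengths.keys() - ids_seen]
    problems += [f"unknown ID: {i}" for i in ids_seen - system_lengths.keys()]
    for sys_id in ids_seen & system_lengths.keys():
        recs = sorted(rec for rec in records if rec[0] == sys_id)
        total = system_lengths[sys_id]
        # All records within the system's beginning and ending stations.
        if any(beg < 0 or end > total for _, beg, end in recs):
            problems.append(f"{sys_id}: record outside stations 0..{total}")
        # Summed distances must not exceed the system's total length.
        if sum(end - beg for _, beg, end in recs) > total:
            problems.append(f"{sys_id}: summed distances exceed {total}")
        # Each end station should equal the next record's beginning station.
        for (_, _, end1), (_, beg2, _) in zip(recs, recs[1:]):
            if end1 != beg2:
                problems.append(f"{sys_id}: gap or overlap at station {end1}")
    return problems

# Flags the uncovered system "B" and the 400-500 ft gap in system "A".
print(check_events([("A", 0, 400), ("A", 500, 1000)], {"A": 1000, "B": 800}))
```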