1.3 Sensitivity and Resolution
These two terms, as applied to a measuring instrument, refer to the smallest change in the
measured quantity to which the instrument responds. Obviously the accuracy of an instrument
will depend to some extent on the sensitivity. If, for example, the sensitivity of a pressure
transducer is 1 kPa, any particular reading of the transducer has a potential error of at least
1 kPa. If the readings expected are in the range of 100 kPa and a possible error of 1% is
acceptable, then the transducer with a sensitivity of 1 kPa may be acceptable, depending
upon what other sources of error may be present in the measurement. A highly sensitive
instrument is, however, more difficult to use; therefore an instrument with a sensitivity
significantly greater than that necessary to obtain the desired accuracy is no more desirable
than one with insufficient sensitivity.
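To make the 1% error budget above concrete, the following is a minimal sketch of the arithmetic; the function name and values are illustrative and not part of the handbook.

```python
# Minimal sketch (names are illustrative, not from the handbook): the error
# bound contributed by sensitivity alone, as a fraction of the expected reading.

def sensitivity_error_fraction(sensitivity, expected_reading):
    """Potential error from sensitivity alone, as a fraction of the reading."""
    return sensitivity / expected_reading

# Pressure-transducer example from the text: 1-kPa sensitivity, ~100-kPa readings.
fraction = sensitivity_error_fraction(sensitivity=1.0, expected_reading=100.0)
print(f"Potential error from sensitivity: {fraction:.1%}")  # 1.0%, right at a 1% budget
```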
Many instruments today have digital readouts. For such instruments the concepts of
sensitivity and resolution are defined somewhat differently than they are for analog-type
instruments. For example, the resolution of a digital voltmeter depends on the "bit"
specification and the voltage range. The relationship between the two is expressed by the
equation

    R = V / 2^n

where R = resolution in volts
      V = voltage range
      n = number of bits
Thus an 8-bit instrument on a 1-V scale would have a resolution of 1/256, or about 0.004 V.
On a 10-V scale that would increase to about 0.04 V. As with analog instruments, the higher
the resolution, the more difficult the instrument is to use, so if the choice is available, one
should use the instrument that just gives the desired resolution and no more.
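The numbers above follow directly from the relation R = V / 2^n; the short sketch below reproduces them. The function name is illustrative, not from the handbook.

```python
# Minimal sketch of the relation R = V / 2**n for a digital readout; the
# function name is illustrative, not from the handbook.

def digital_resolution(voltage_range, n_bits):
    """Resolution in volts for a given range and bit count."""
    return voltage_range / 2 ** n_bits

print(digital_resolution(1.0, 8))   # 0.00390625 V, i.e. about 0.004 V on the 1-V scale
print(digital_resolution(10.0, 8))  # 0.0390625 V,  i.e. about 0.04 V on the 10-V scale
```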
1.4 Linearity
The calibration curve for an instrument does not have to be a straight line. However, con-
version from a scale reading to the corresponding measured value is most convenient if it
can be done by multiplying by a constant rather than by referring to a nonlinear calibration
curve or by computing from an equation. Consequently instrument manufacturers generally
try to produce instruments with a linear readout, and the degree to which an instrument
approaches this ideal is indicated by its linearity. Several definitions of linearity are used in
instrument specification practice. The so-called independent linearity is probably the most
commonly used in specifications. For this definition the data for the instrument readout versus
the input are plotted and then a ‘‘best straight line’’ fit is made using the method of least
squares. Linearity is then a measure of the maximum deviation of any of the calibration
points from this straight line. This deviation can be expressed as a percentage of the actual
reading or a percentage of the full-scale reading. The latter is probably the most commonly
used, but it may make an instrument appear to be much more linear than it actually is. A
better specification is a combination of the two. Thus, linearity equals A percent of reading
or B percent of full scale, whichever is greater. Sometimes the term proportional linearity
is used to describe linearity limits based on actual readings. Since both are given in terms
of a fixed percentage, an instrument with A percent proportional linearity is much more
accurate at low reading values than an instrument with A percent independent linearity.
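As a concrete illustration of the independent-linearity definition above, the sketch below fits a least-squares "best straight line" to hypothetical calibration data and reports the maximum deviation both as a percentage of full scale and as a percentage of the local reading; the data and variable names are invented for illustration.

```python
# Minimal sketch of an independent-linearity calculation on hypothetical
# calibration data: fit the least-squares straight line, then report the
# maximum deviation as a percentage of full scale and of the local reading.
import numpy as np

inputs = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])    # applied (true) values
readings = np.array([0.1, 2.0, 4.1, 5.9, 8.2, 10.0])  # instrument readout

slope, intercept = np.polyfit(inputs, readings, 1)    # "best straight line" fit
fitted = slope * inputs + intercept
deviation = np.abs(readings - fitted)

pct_full_scale = 100.0 * deviation.max() / readings.max()
nonzero = readings != 0                               # avoid dividing by a zero reading
pct_of_reading = 100.0 * (deviation[nonzero] / np.abs(readings[nonzero])).max()

print(f"{pct_full_scale:.2f}% of full scale, {pct_of_reading:.2f}% of reading")
```

Note how the percent-of-full-scale figure stays fixed across the range, while the percent-of-reading figure grows at the low end of the scale, which is why a percent-of-full-scale specification can make an instrument appear more linear than it is at small readings.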
It should be noted that although specifications may refer to an instrument as having A
percent linearity, what is really meant is A percent nonlinearity. If the linearity is specified
as independent linearity, the user of the instrument should try to minimize the error in