
Statistical theory tells us that, provided sufficient readings are taken, 68.3% of the readings will lie within one standard deviation of the mean, and that the mean ±2s and the mean ±3s will contain 95.5% and 99.7% of the readings, respectively. Hence there is only a 0.3% chance that a reading exceeding the mean of the blank readings by three times the standard deviation is merely an unusually high blank reading. Thus, the limit of detection may be defined as 'that quantity of element which gives rise to a reading equal to three times the standard deviation of a series of at least 10 determinations at or near the blank level'. This assumes a 'normal' distribution of errors, and may consequently give values that are more or less optimistic than those achieved in practice.
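As a worked illustration of this definition, the short sketch below estimates a detection limit from a set of blank readings and a calibration slope. It assumes a linear calibration; the function name, the readings and the slope of 0.05 absorbance per mg l⁻¹ are invented purely for illustration.

```python
import statistics

def limit_of_detection(blank_readings, calibration_slope):
    """Detection limit as the concentration whose signal equals three times
    the standard deviation of a series of blank-level readings.

    blank_readings    : at least 10 readings taken at or near the blank level
    calibration_slope : signal per unit concentration from the calibration curve
    """
    if len(blank_readings) < 10:
        raise ValueError("use a series of at least 10 determinations")
    s_blank = statistics.stdev(blank_readings)  # sample standard deviation
    return 3 * s_blank / calibration_slope

# Hypothetical data: 10 blank absorbance readings, slope of 0.05 absorbance per mg/l
blanks = [0.0021, 0.0018, 0.0025, 0.0019, 0.0022,
          0.0020, 0.0017, 0.0024, 0.0023, 0.0021]
print(f"Limit of detection = {limit_of_detection(blanks, 0.05):.4f} mg/l")
```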

The limit of detection is a useful figure which takes into account the stability of the total instrumental system. It may vary from instrument to instrument and even from day to day as, for example, mains-borne noise varies. For atomic absorption techniques, spectroscopists therefore often also quote the characteristic concentration (often, and erroneously, referred to as the sensitivity, since it is in fact the reciprocal of the sensitivity) for 1% absorption, i.e. that concentration of the element which gives rise to 0.0044 absorbance units. This can easily be read off the calibration curve. The characteristic concentration depends on such factors as the atomization efficiency and the flame system, and is independent of noise. Both this figure and the limit of detection give different, but useful, information about instrumental performance.
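For a linear calibration, the characteristic concentration follows directly from the slope, as in the sketch below; the slope of 0.05 absorbance per mg l⁻¹ is an invented figure used only to show the arithmetic.

```python
def characteristic_concentration(calibration_slope):
    """Concentration giving 1% absorption, i.e. 0.0044 absorbance units,
    assuming a linear calibration with the given slope
    (absorbance per unit concentration)."""
    return 0.0044 / calibration_slope

# Hypothetical slope of 0.05 absorbance per mg/l gives about 0.088 mg/l
print(f"Characteristic concentration = {characteristic_concentration(0.05):.3f} mg/l")
```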

Q. Define the limit of detection and the characteristic concentration.


Q. What information is given by the limit of detection, and how does this differ from that given by the characteristic concentration?


1.5 Interferences and Errors

1.5.1 Interferences

Interference is defined as an effect causing a systematic deviation in the measurement of the signal when a sample is nebulized, as compared with the measurement that would be obtained for a solution of equal analyte concentration in the same solvent, but in the absence of concomitants. The interference may be due to a particular concomitant or to the combined effect of several concomitants. A concomitant causing an interference is called an interferent. An interference only causes an error if it is not adequately corrected for during an analysis. Uncorrected interferences may lead to either enhancements or depressions of the signal. Additionally, errors may arise in analytical methods in other ways, e.g. in sample pretreatment via the