Page 186 - Computational Retinal Image Analysis

3  Uncertainty and estimation




distinguishes the random estimator from an estimate, the latter being a value the estimator takes, T(x₁, x₂, …, xₙ). Nonetheless, neither we nor others in the literature are systematically careful in making this distinction; it is important conceptually, but some sloppiness is acceptable as long as the reader understands what is meant.
                                            
Second, we often write est(θ) or θ̂ for the value of an estimator, so we would have, say, T = θ̂. The latter notation, using θ̂ to denote an estimate or an estimator, is very common in the statistical literature. Sometimes, however, θ̂ refers specifically to the maximum likelihood estimator (MLE).
                     The most common methods of parameter estimation are:
                  •  The method of moments uses the sample mean and variance to estimate the
                     theoretical mean and variance.
•  The method of maximum likelihood maximizes the likelihood function, which
   is defined up to a multiplicative constant. A related method is the method of
   restricted maximum likelihood.
                  •  The method of least squares finds the parameter estimate so that it minimizes
                     the sum of residual squares.
•  Markov chain Monte Carlo (MCMC) methods are computationally intensive
   methods that give an estimate of the parameter vector as well as of its
   multivariate distribution.
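As an illustration of the first two methods above, the following sketch (not from the chapter; the data are simulated) computes the method-of-moments and maximum-likelihood estimates for a normal sample. For the normal mean the two estimates coincide with the sample mean; for the variance, the MLE divides by n while the usual unbiased estimator divides by n − 1.

```python
import random

# Illustrative simulated sample; the true parameters mu = 5, sigma = 2
# are chosen arbitrarily for this sketch.
random.seed(0)
x = [random.gauss(5.0, 2.0) for _ in range(1000)]
n = len(x)

# Method of moments and MLE of mu: the sample mean.
mean_hat = sum(x) / n

# MLE of sigma^2 divides by n; the unbiased estimator divides by n - 1.
ss = sum((xi - mean_hat) ** 2 for xi in x)
var_mle = ss / n
var_unbiased = ss / (n - 1)

print(f"mean_hat = {mean_hat:.3f}")
print(f"var_mle = {var_mle:.3f}, var_unbiased = {var_unbiased:.3f}")
```

The two variance estimates differ only by the factor n/(n − 1), which is negligible for large samples but matters for small ones.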
For scientific inference, parameter estimates are useless without some notion of the precision of the estimate, i.e., the certainty of the estimate. One of the simplest illustrations of precision is the estimation of a normal mean. A statistical theorem states that if X₁, X₂, …, Xₙ is a random sample from a N(μ, σ²) distribution, with the value of σ known, then the interval

    ( X̄ − 1.96 · SE(X̄),  X̄ + 1.96 · SE(X̄) )

is a 95% confidence interval (CI) for μ, where SE(X̄) is given by SE(X̄) = σ/√n.
The beauty of this confidence interval lies in the simple manipulations that allow us to derive its form. We take the description of variation given above and convert it into a quantitative inference about the value of the unknown parameter μ. For more explanation of confidence intervals and how they relate to hypothesis testing, see, e.g., Refs. [3, 4]. For examples of statistical inference in retinal imaging, see, e.g., Refs. [9, 10].
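The interval is straightforward to compute; the sketch below applies the formula to an illustrative simulated sample (the true μ = 10 and σ = 2 are assumptions of this example, not values from the chapter).

```python
import math
import random
import statistics

# Simulated sample from N(mu, sigma^2) with sigma known; mu and sigma
# here are illustrative choices for this sketch.
random.seed(1)
sigma = 2.0
x = [random.gauss(10.0, sigma) for _ in range(400)]
n = len(x)

# 95% CI for mu: x_bar +/- 1.96 * SE(x_bar), with SE(x_bar) = sigma / sqrt(n).
x_bar = statistics.fmean(x)
se = sigma / math.sqrt(n)
ci = (x_bar - 1.96 * se, x_bar + 1.96 * se)

print(f"x_bar = {x_bar:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note how the width of the interval, 2 · 1.96 · σ/√n, shrinks at the rate 1/√n: quadrupling the sample size halves the interval.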


                  3.3  Words of caution on statistical and clinical significance
                  and multiple tests
When the P-value of a statistical test is lower than a predefined significance level, i.e., when P < α, we have statistical significance, or we say that the result is statistically significant. Here we elaborate on two main issues, or misconceptions, concerning statistical significance: clinical significance and the multiple testing problem.
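The multiple testing problem can be seen with a line of arithmetic (this sketch is illustrative and not from the chapter): with m independent tests each performed at level α, and all null hypotheses true, the chance of at least one false positive is 1 − (1 − α)^m, which grows rapidly with m.

```python
# Family-wise error rate for m independent tests at level alpha,
# assuming all null hypotheses are true.
alpha = 0.05
fwer = {m: 1 - (1 - alpha) ** m for m in (1, 5, 20, 100)}
for m, p in fwer.items():
    print(f"m = {m:3d}: P(at least one false positive) = {p:.3f}")

# One simple remedy is the Bonferroni correction: test each of the m
# hypotheses at level alpha / m, which bounds the family-wise error
# rate by alpha.
bonferroni_level = alpha / 20
print(f"Bonferroni per-test level for m = 20: {bonferroni_level}")
```

At m = 20 the family-wise error rate already exceeds 0.64, which is why corrections such as Bonferroni (or false discovery rate control) are needed when many tests are run.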
Statistical significance alone is not a sufficient basis for interpreting the findings of a statistical analysis when generating knowledge about clinical phenomena [11].