were captured using a Topcon TRC NW6 non-mydriatic camera at a 45° field of view at pixel resolutions of 1440 × 960, 2240 × 1488, or 2304 × 1536 [10]. http://www.adcis.net/en/Download-Third-Party/Messidor.html
STARE: Structured Analysis of the Retina. The STARE database (2000) consists of ~400 images taken on a TopCon TRV-50 at a 35° field of view. The film was digitized at 605 × 700 pixels per color plane. The database contains ground truth markings for the optic disc center, where present, and includes a wide variety of disease presentations and image quality levels to contend with [11]. http://cecas.clemson.edu/~ahoover/stare/
Other databases in use include the DRIONS-DB: Digital Retinal Images for Optic Nerve Segmentation Database, which contains 110 images and multiple ground truth segmentations of the optic nerve head (http://www.ia.uned.es/~ejcarmona/DRIONS-DB.html) [12]. The DIARETDB0 and DIARETDB1 databases contain 130 and 89 images, respectively, mostly of eyes with at least mild DR (http://www.it.lut.fi/project/imageret/diaretdb0/) [13, 14]. The e-ophtha database contains 463 images with DR lesions and ground truth segmentations for each lesion (http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html) [15]. The Kaggle DR database was made available for a DR labeling competition in 2015. This large dataset from EyePACS contains images from multiple cameras at multiple pixel resolutions; over 80,000 retinal images with DR grades were made available to train and validate deep learning models [16] (https://www.kaggle.com/c/diabetic-retinopathy-detection/data).




                         4  Algorithm accuracy
In order to test an algorithm's accuracy, there must be some level of ground truth available for the image. For OD and fovea detection, an ophthalmologist or experienced image grader generally marks the center-point pixel of each landmark. Results for an algorithm can then be stated as the distance from the detected point to the ground truth pixel, and the mean and standard deviation of this distance can be reported over a dataset. For a binary measure of accuracy, a threshold distance from the ground truth pixel is chosen; one disc radius is the usual criterion. In other cases, the boundary of the OD may be delineated or an OD mask may be available, and a detection is considered correct if it falls within the ground truth boundary.
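
As a concrete sketch of these metrics, the following Python/NumPy snippet computes the per-image distance, its mean and standard deviation over a dataset, the one-disc-radius binary accuracy, and the mask-containment criterion. The function names and array layout are our own illustration, not taken from any of the cited works.

    import numpy as np

    def od_detection_metrics(pred_xy, truth_xy, disc_radius_px):
        # Evaluate OD-center detections against ground truth marks.
        # pred_xy, truth_xy: (N, 2) arrays of pixel coordinates;
        # disc_radius_px: (N,) per-image disc radius in pixels, used
        # for the common one-disc-radius success criterion.
        pred_xy = np.asarray(pred_xy, dtype=float)
        truth_xy = np.asarray(truth_xy, dtype=float)

        # Euclidean distance from each detection to its ground truth pixel.
        dist = np.linalg.norm(pred_xy - truth_xy, axis=1)

        # Binary accuracy: correct if within one disc radius of the mark.
        success = dist <= np.asarray(disc_radius_px, dtype=float)

        return {
            "mean_dist_px": float(dist.mean()),
            "std_dist_px": float(dist.std()),
            "accuracy_1R": float(success.mean()),
        }

    def inside_od_mask(pred_xy, od_mask):
        # Mask-based criterion: correct if the predicted center lands
        # inside the ground-truth OD segmentation mask (2D boolean array).
        x, y = int(round(pred_xy[0])), int(round(pred_xy[1]))
        h, w = od_mask.shape
        return 0 <= y < h and 0 <= x < w and bool(od_mask[y, x])

For example, od_detection_metrics(preds, marks, radii)["accuracy_1R"] would give the fraction of images in which the OD was detected within one disc radius of the grader's mark.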
When researchers compare their algorithm against others, they usually do so by including results on one or more of the open datasets. Processing time is also considered alongside detection accuracy. There has generally been a tradeoff between these two factors, although more recent methods have shown that both can be achieved [17]. Self-reported running times of older methods should also be interpreted with care, since those methods would run considerably faster on today's hardware.
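
When reporting running time, a simple wall-clock harness such as the sketch below is typical. Here detect_fn is a hypothetical placeholder for whatever detector is under test, and absolute timings will of course vary with hardware, which is exactly the caveat above.

    import time

    def mean_detection_time(detect_fn, images):
        # Average wall-clock time per image for an arbitrary detector.
        # detect_fn is a placeholder callable; substitute the method
        # being benchmarked.
        elapsed = []
        for img in images:
            t0 = time.perf_counter()
            detect_fn(img)
            elapsed.append(time.perf_counter() - t0)
        return sum(elapsed) / len(elapsed)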