points have been utilized to establish point correspondences across cross-modal images [113, 121]. Keypoint features, such as SIFT and SURF, and their descriptors have also been employed for the same purpose [92, 109, 114, 116, 117, 119, 120]. In addition, novel descriptors associated with conventional keypoints have been proposed [125, 131].
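
As an illustration of the keypoint-based strategy, the following Python sketch establishes tentative correspondences between two retinal images using OpenCV's SIFT implementation. The file names, the ratio-test threshold, and the choice of SIFT itself are illustrative assumptions rather than settings taken from the cited works.

```python
import cv2

# Hypothetical input files: two retinal images, possibly from different modalities.
img_a = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("slo.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute their descriptors in both images.
sift = cv2.SIFT_create()
kp_a, desc_a = sift.detectAndCompute(img_a, None)
kp_b, desc_b = sift.detectAndCompute(img_b, None)

# Brute-force matching with Lowe's ratio test to keep only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
candidates = matcher.knnMatch(desc_a, desc_b, k=2)
good = [m for m, n in candidates if m.distance < 0.75 * n.distance]

# The surviving matches provide tentative point correspondences; a robust
# estimator (e.g., RANSAC) would typically prune them further before the
# registration transform is estimated.
src_pts = [kp_a[m.queryIdx].pt for m in good]
dst_pts = [kp_b[m.trainIdx].pt for m in good]
```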

In Refs. [101, 102], mutual information was proposed as a cue for the registration of fundus and SLO images. The methods in Refs. [138, 142] register frontal OCT scans with SLO images in a similar fashion. As frontal images are visually similar across modalities, registration is achieved by conventional matching of vessels or vessel features.
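
The sketch below illustrates how mutual information can serve as an intensity-based registration cue, here scoring candidate integer translations of one frontal image against another. The histogram bin count, the exhaustive translation search, and the circular-shift simplification are assumptions made for brevity, not details of the cited methods.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized grayscale images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_translation(fixed, moving, max_shift=10):
    """Exhaustively search integer shifts and keep the one maximizing MI."""
    best_shift, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps around at the borders, a simplification that is
            # acceptable only for small shifts.
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_shift, best_mi = (dy, dx), mi
    return best_shift, best_mi
```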

More challenging is the registration of axial OCT scans with frontal fundus images. This registration facilitates the acquisition of a volumetric reconstruction of retinal tissue below the retinal surface. The works in Refs. [147–149] utilize retinal vessels as the cue for registering frontal fundus images to axial OCT scans, requiring vessel segmentation as an initial step. However, as retinal vessel segmentation is still an open problem, vessel segmentation errors propagate into the registration phase. In Ref. [150], a feature-based approach is proposed, which capitalizes on corner features as control points and utilizes histograms of oriented gradients to better match the neighborhoods of these points. A robust approach is also included to remove correspondence outliers and estimate the transform that best registers the OCT scans to the fundus image.
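
A minimal sketch of such a feature-based pipeline is given below, in the spirit of (but not reproducing) Ref. [150]: corner detection, HOG-style description of corner neighborhoods, descriptor matching, and RANSAC-based rejection of correspondence outliers while fitting an affine transform. The patch size, detector parameters, and affine model are illustrative assumptions, and oct_projection stands for a hypothetical 2-D en face projection of the OCT volume.

```python
import cv2
import numpy as np
from skimage.feature import hog

def corner_hog_features(image, patch=32, max_corners=200):
    """Detect corners and describe each one with a HOG vector of its neighborhood."""
    corners = cv2.goodFeaturesToTrack(image, max_corners, 0.01, 10)
    half = patch // 2
    points, descriptors = [], []
    for c in corners.reshape(-1, 2):
        x, y = int(c[0]), int(c[1])
        roi = image[y - half:y + half, x - half:x + half]
        if roi.shape != (patch, patch):
            continue  # skip corners too close to the image border
        descriptors.append(hog(roi, orientations=9,
                               pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2)))
        points.append((x, y))
    return np.float32(points), np.float32(descriptors)

def register_oct_to_fundus(fundus, oct_projection):
    """Estimate an affine transform mapping the OCT projection onto the fundus image."""
    pts_f, desc_f = corner_hog_features(fundus)
    pts_o, desc_o = corner_hog_features(oct_projection)

    # Nearest-neighbor matching of the HOG descriptors.
    matches = cv2.BFMatcher(cv2.NORM_L2).match(desc_o, desc_f)
    src = np.float32([pts_o[m.queryIdx] for m in matches])
    dst = np.float32([pts_f[m.trainIdx] for m in matches])

    # RANSAC discards correspondence outliers while fitting the transform.
    transform, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return transform, inliers
```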



5  Conclusions

A wide range of methods exists for the preprocessing, enhancement, and registration of retinal images acquired by conventional and emerging imaging modalities.

Preprocessing and enhancement are required for a broad variety of uses. The most straightforward is the inspection of the image by a medical professional, where preprocessing and enhancement must preserve the fidelity of the acquired image while clarifying or accentuating its structure and anatomical features. Preprocessing and enhancement are also utilized as initial steps in algorithmic image analysis, to facilitate and increase the accuracy of the detection, recognition, and measurement of anatomical features. For these cases, generic image preprocessing methods have been applied, though more recent approaches carefully select and tailor image preprocessing according to the goals of the subsequent image analysis. Due to this variety of uses, benchmarking of image preprocessing tasks has been difficult and scarce. In addition, as preprocessing is typically only part of an image analysis method, when an evaluation is available it concerns the entire method rather than the preprocessing step in isolation.

RIR is also the basis for a wide spectrum of tasks. First, registration of multiple images allows them to be combined into improved or wider retinal images. Moreover, RIR has been employed for the comparison of retinal images, which is essential for monitoring a disease and assessing its treatment. As image registration is a