amount of data analyzed by subsequent algorithms.
4 Candidate feature extraction. Representing candidates in terms of features
reduces the data dimensionality and improves the classification performance.
5 Classification. Each object/pixel is assigned a probability of being a lesion; a schematic sketch of the full pipeline follows.
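To make this candidate-based pipeline concrete, here is a minimal, schematic sketch. It is not any particular published method: the green-channel preprocessing, intensity thresholding, connected-component candidate extraction, and per-candidate statistics are illustrative placeholders, and `classifier` stands for any fitted probabilistic classifier.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_lesions(fundus_rgb, classifier):
    """Schematic candidate-based pipeline: preprocess, extract candidates,
    describe each candidate with features, and score it with a classifier.
    Every concrete operation below is an illustrative stand-in."""
    green = fundus_rgb[..., 1].astype(float)            # preprocessing: green channel
    mask = green > green.mean() + 2.0 * green.std()     # crude candidate mask (bright outliers)
    labels, n_candidates = ndi.label(mask)              # connected components = candidates
    feats = []
    for i in range(1, n_candidates + 1):
        region = green[labels == i]
        feats.append([region.size, region.mean(), region.std()])  # simple per-candidate features
    if not feats:
        return labels, np.empty(0)
    scores = classifier.predict_proba(np.asarray(feats))[:, 1]    # probability of being a lesion
    return labels, scores
```

Here `classifier` is assumed to be any fitted probabilistic model (e.g., a scikit-learn SVM with probability estimates enabled); the surveyed methods differ mainly in how each of these stages is realized.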
Furthermore, depending on the underlying methodology, these techniques can be divided into the following six categories: morphology, machine learning, region growing, thresholding, deep learning, and miscellaneous.
4.1 Morphology
These methods use morphological operations to find lesions. They are sensitive to changes in the shape and size of the structuring elements, which can negatively affect detection accuracy. Baudoin et al. [22] were among the first researchers to work
on MA detection in 1983 using fluorescein angiogram images. They employed a
mathematical morphology-based approach to remove vessels and applied a top-
hat transformation with linear structuring elements. Several methods followed this
approach; however, since intravenous use of fluorescein can cause death in 1 in
222,000 cases [21], these methods were abandoned. Walter et al. [23] also used
a top-hat-based method and automated thresholding to extract MA candidates.
They extracted 15 features and applied kernel density estimation with variable
bandwidth for MA classification. Similarly, Streeter and Cree [24] combined the top-hat transform with matched filtering to find lesion candidates. Subsequently, linear discriminant analysis was used to produce the final segmentation. Harangi et al. [25]
used morphological operators to identify exudate candidates. Next, an active contour
model was employed to find the lesions’ edges. Similarly, Xiaohui and Chutatape [26]
combined morphological transformations for candidate extraction with contextual
features to segment BLs.
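To illustrate the top-hat idea that recurs in this family of methods, the sketch below computes a sup-of-openings top-hat with linear structuring elements on the inverted green channel, so that elongated vessel segments are suppressed while small round bright blobs (e.g., MA candidates) are retained. It is a generic sketch under assumed parameters (kernel length, number of orientations), not a reimplementation of [22] or [23].

```python
import numpy as np
from scipy import ndimage as ndi

def line_kernel(length, angle_deg):
    """Flat, line-shaped structuring element of a given length and orientation."""
    k = np.zeros((length, length), dtype=bool)
    c = length // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 2 * length):
        r = int(round(c + t * np.sin(theta)))
        q = int(round(c + t * np.cos(theta)))
        if 0 <= r < length and 0 <= q < length:
            k[r, q] = True
    return k

def sup_of_openings_tophat(green_channel, length=11, n_angles=12):
    """Top-hat that keeps small round bright blobs and suppresses vessels."""
    img = green_channel.astype(float)
    img = img.max() - img                                 # invert: MAs and vessels become bright
    openings = [ndi.grey_opening(img, footprint=line_kernel(length, a))
                for a in np.linspace(0.0, 180.0, n_angles, endpoint=False)]
    vessel_estimate = np.max(openings, axis=0)            # elongated structures survive some opening
    return img - vessel_estimate                          # response map of small round candidates
```

Thresholding this response map, as in Walter et al. [23], would yield the MA candidates passed on to feature extraction and classification.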
4.2 Machine learning
Machine learning-based methods include both supervised (e.g., neural networks) and
unsupervised (e.g., clustering) learning algorithms. Niemeijer et al. [27] combined
k-nearest neighbor and linear discriminant classifiers to label each pixel as either
BL or background. Rocha et al. [28] introduced a method based on a dictionary of
visual words constructed using SIFT and SURF features. Each image was treated
as a bag of features and used as input to support vector machines (SVMs) for final
classification. Veiga et al. [29] presented an algorithm based on Laws texture features.
SVMs were used in a cascade: the first SVM extracted MA candidates, while the second performed the final MA classification. Srivastava
et al. [30] used Frangi-based filters that were manually fine-tuned to distinguish
vessels from RLs. The filters were applied to image patches of multiple sizes to extract features. Finally, these features were classified using an SVM.
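As a rough illustration of this filter-plus-SVM pattern, the sketch below computes Frangi vesselness statistics over candidate patches at a few scales and trains an SVM on them. It is a loose analogue of the approach of [30]: the patch size, filter scales, feature statistics, and synthetic training data are chosen purely for illustration.

```python
import numpy as np
from skimage.filters import frangi
from sklearn.svm import SVC

def patch_features(patch):
    """Frangi vesselness statistics at several scales: elongated (vessel-like)
    structures respond strongly, roughly round red lesions respond weakly."""
    feats = []
    for sigmas in ((1, 2), (2, 4), (4, 8)):          # illustrative filter scales
        v = frangi(patch, sigmas=sigmas, black_ridges=True)
        feats.extend([v.mean(), v.max(), v.std()])
    return np.asarray(feats)

# Toy usage with synthetic patches; real candidates would be cropped from fundus images.
rng = np.random.default_rng(0)
patches = [rng.random((32, 32)) for _ in range(20)]
labels = np.array([0, 1] * 10)                       # 0 = vessel fragment, 1 = red lesion
X = np.stack([patch_features(p) for p in patches])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)
print(clf.predict_proba(X[:3])[:, 1])                # estimated P(lesion) for three patches
```

In practice the patches would be cropped around candidate locations in the preprocessed fundus image, and the labels would come from expert annotations.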
Osareh et al. [31] combined fuzzy c-means clustering and a genetic algorithm for candidate extraction