green, or blue channel of the image; r_c(p) denotes the reflectance function of the retina; L_c is the illumination of the camera; and t(p) describes the portion of the light that reaches the camera.
This model uses a precataract clear image as a reference to estimate α. However, such an image is seldom available, and the illumination can differ if the precataract image is captured under different conditions. Therefore, an accurate estimation of α is unlikely. Meanwhile, the value of α only affects the scale of the final image. Given this, the following simplified model is proposed:
I_c(p) = D_c(p) t(p) + L_c (1 − t(p)),    (2)
where D_c(p) = L_c r_c(p) denotes the image that would be captured under ideal conditions. The model in Eq. (2) is similar to the dehaze model in computer vision, where the attenuation by haze or fog is modeled by air attenuation and scattering [50]. It is a special case of Eq. (1) obtained by setting α = 1. By applying this model to the retinal image, the task of removing the clouding caused by cataracts is converted into a standard dehaze task in computer vision.
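As a concrete illustration, the sketch below inverts Eq. (2) to recover the declouded image D_c(p) once the illumination L_c and the transmission map t(p) are available. The function name, the percentile-based illumination estimate, and the lower bound on t are illustrative assumptions; estimating t(p) is precisely where the dehaze methods discussed next differ.

```python
import numpy as np

def decloud_from_transmission(I, L, t, t_min=0.1):
    """Invert Eq. (2), I = D*t + L*(1 - t), for the clear image D.

    I : HxWx3 float array in [0, 1], observed (clouded) retinal image.
    L : length-3 array, per-channel illumination L_c (assumed known or estimated).
    t : HxW transmission map in (0, 1] (its estimation is method-specific).
    t_min : lower bound on t to avoid amplifying noise where t is tiny.
    """
    t = np.clip(t, t_min, 1.0)[..., None]           # broadcast over channels
    D = (I - L[None, None, :] * (1.0 - t)) / t      # D = (I - L(1 - t)) / t
    return np.clip(D, 0.0, 1.0)

# Hypothetical usage with a crude illumination estimate:
# I = load_retinal_image()                          # hypothetical loader
# L = np.percentile(I.reshape(-1, 3), 99, axis=0)   # one simple choice for L_c
# D = decloud_from_transmission(I, L, t)            # t from any dehaze method
```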
In computer vision, many methods [50–55] have been proposed to solve the dehaze problem in Eq. (2). Tan et al. [50] used a Markov random field to enhance the local contrast; however, this often produces over-saturated images. Fattal [51] proposed to account for both surface shading and scene transmission, but this solution does not perform well under heavy haze. He et al. [52] proposed a novel dark channel prior assumption; however, the assumption does not always hold for retinal images, which lack the shadows and complex structures it relies on. Recently, guided image filtering (GIF) [53] was proposed for single-image dehazing. However, it has a limitation: it does not preserve the fine structures that can be important in retinal image analysis tasks. To overcome this limitation, we propose a method that preserves the structures in the original images. Motivated by GIF, the proposed structure-preserving guided retinal image filtering (SGRIF) is composed of a global structure transfer filtering and a global edge-preserving smoothing. Different from most reported work, which ends with image quality evaluation, we also explore how the processing affects subsequent automatic analysis tasks. Two applications, deep learning-based optic disc segmentation and sparse learning-based CDR computation, are conducted to show the advantages of the method.
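For reference, a minimal single-channel version of the classic guided image filter [53] that motivates SGRIF is sketched below. It implements the standard local linear model q = a·I + b solved per window; it is background material rather than the SGRIF pipeline itself, and the window radius and regularization values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Classic gray-scale guided image filter (He et al. [53]).

    I : HxW float guidance image.
    p : HxW float input image to be filtered.
    radius, eps : window radius and regularization (illustrative values).
    Returns q = mean_a * I + mean_b, a locally linear transform of I.
    """
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size, mode="reflect")

    mean_I, mean_p = mean(I), mean(p)
    var_I = mean(I * I) - mean_I ** 2        # per-window variance of I
    cov_Ip = mean(I * p) - mean_I * mean_p   # per-window covariance of I and p

    a = cov_Ip / (var_I + eps)               # linear coefficients per window
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)             # average coefficients, then apply
```

Because each output pixel is a locally linear function of the guidance image, the filter transfers structure from I into the output; SGRIF keeps this idea but formulates the structure transfer and the edge-preserving smoothing globally, as described above.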
Contributions
The main contributions are summarized as follows.
1. We give a review of existing optic disc and optic cup segmentation algorithms.
2. We introduce an SGRIF algorithm for declouding retinal images.
3. The experimental results show that the SGRIF algorithm improves image contrast and maintains edges for further analysis.
4. The method benefits subsequent analysis as well. It improves both accuracies in