Page 175 - Computational Retinal Image Analysis
P. 175
170 CHAPTER 9 Validation
[22] T. Stosic, B.D. Stosic, Multifractal analysis of human retinal vessels, IEEE Trans. Med.
Imaging 25 (8) (2006) 1101–1107.
[23] M.R.K. Mookiah, S. McGrory, S. Hogg, et al., Towards standardization of retinal vas-
cular measurements: on the effect of image centering, in: Computational Pathology and
Ophthalmic Medical Image Analysis, Proc MICCAI OMIA-5 Intern. Workshop, Granada,
Spain, Sep 2018, Lecture Notes in Computer Science, vol. 11039, Springer, 2018.
[24] J. Cohen, A coefficient of agreement for nominal scales, Educ. Psychol. Meas. 20 (1) (1960).
[25] J.L. Fleiss, Measuring nominal scale agreement among many raters, Psychol. Bull. 76 (5)
(1971) 378–382.
[26] J. Cohen, Weighted kappa: nominal scale agreement with provision for scaled disagree-
ment or partial credit, Psychol. Bull. 70 (4) (1968).
[27] S. Hawkins, Identification of Outliers, Monographs in Statistics an Applied Probability,
Springer, 1980.
[28] Y.-H. Kim, A.C. Kak, Error analysis of robust optical flow estimation by least median of
squares methods for the varying illumination model, IEEE Trans. Pattern Anal. Mach.
Intell. 28 (9) (2006) 1418–1435.
[29] M.A. Fischler, R.C. Bolles, Random sample consensus: a paradigm for model fitting
with applications to image analysis and automated cartography, Commun. ACM 24 (6)
(1981) 381–395.
[30] R. Raguram, M. Frahm, M. Pollefeys, A comparative analysis of RANSAC techniques
leading to random sample consensus, in: Proc. Europ. Conf. on Computer Vision
(ECCV), Part II, Lecture Notes in Computer Science, vol. 5303, Springer, 2008.
[31] T. Tommasini, A. Fusiello, E. Trucco, V. Roberto, Making good features track better, in:
Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 1998.
[32] D. Freedman, P. Diaconis, On the histogram as a density estimator: L2 theory, Probab.
Theory Relat. Fields 57 (4) (1981) 453–476.
[33] S. Philip, A.D. Fleming, K.A. Goatman, et al., The efficacy of automated “disease/no
disease” grading for diabetic retinopathy in a systematic screening programme, Br. J.
Ophthalmol. 91 (2007) 1512–1517.
[34] B.H. Menze, A. Jakab, S. Bauer, et al., The multimodal brain tumor image segmentation
benchmark (BRATS), IEEE Trans. Med. Imaging 34 (10) (2015) 1993–2024.
[35] Y. Huo, Z. Xu, H. Moon, et al., SynSeg-Net: synthetic segmentation without target mo-
dality ground truth, IEEE Trans. Med. Imaging 38 (4) (2019) 1016–1025.
[36] T. Joyce, A. Chartsias, S.A. Tsaftaris, Deep multi-class segmentation without ground-
truth labels, in: Proc. Int. Conf. Medical Imaging with Deep Learning, Amsterdam, 2018.
[37] T. Kohlberger, V. Singh, C. Alvino, et al., Evaluating segmentation error without
ground truth, in: Proc. Int. Conf. on Medical Image Computing and Computer-Assisted
Intervention (MICCAI), Springer, 2012.
[38] V.V. Valindria, I. Lavdas, W. Bai, et al., Reverse classification accuracy: predicting seg-
mentation performance in the absence of ground truth, IEEE Trans. Med. Imaging 36 (8)
(2017) 1597–1606.
[39] D.P. Papadopoulos, J.R.R. Uijlings, F. Keller, et al., Extreme clicking for efficient object
annotations, in: Proc. IEEE Int. Conf. on Computer Vision (ICCV), 2017.
[40] X. Wang, Y. Peng, L. Lu, et al., TieNet: text-image embedding network for common tho-
rax disease classification and reporting in chest X-rays, in: Proc. IEEE/CVF Conference
on Computer Vision and Pattern Recognition, 2018.