tinguish points on the object surface from points that are not on the surface, one can estimate the K coefficients at runtime and thereby obtain a surface segmentation of the object.


3.2.1.2 Traditional feature engineering

Based on this formulation, Zheng et al. [31] proposed to use the Probabilistic Boosting Tree (PBT) [203] as a discriminative learner. In terms of feature computation, 3D Haar wavelets [259] were extracted and used to encode the image information for translation estimation. Haar wavelets can be computed very efficiently and generalize easily to high dimensions. However, they are of limited use for capturing orientation and scale information: the extension requires a pre-alignment of the volume with the wavelet sampling pattern, which is tedious and time-consuming for a 3D learning problem.
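The efficiency of Haar-like features comes from the integral image (here, integral volume) trick: once cumulative sums are precomputed, the sum over any axis-aligned box costs a constant number of lookups. The sketch below illustrates this in Python with NumPy; the function names, the padding convention, and the simple two-block feature layout are illustrative choices, not the exact feature pool used in [31].

```python
import numpy as np

def integral_volume(vol):
    # Pad with a leading zero plane on each axis so iv[x, y, z] equals the sum
    # of vol[:x, :y, :z]; this turns any box sum into a fixed 8-corner lookup.
    iv = np.zeros(tuple(s + 1 for s in vol.shape), dtype=np.float64)
    iv[1:, 1:, 1:] = vol.cumsum(0).cumsum(1).cumsum(2)
    return iv

def box_sum(iv, lo, hi):
    # Sum of vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] by inclusion-exclusion.
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    return (iv[x1, y1, z1] - iv[x0, y1, z1] - iv[x1, y0, z1] - iv[x1, y1, z0]
            + iv[x0, y0, z1] + iv[x0, y1, z0] + iv[x1, y0, z0] - iv[x0, y0, z0])

def haar_3d_feature(iv, lo, hi, axis=0):
    # A simple two-block Haar-like response: difference between the two halves
    # of the box along `axis` (illustrative; real pools use many block layouts).
    lo, hi = list(lo), list(hi)
    mid = (lo[axis] + hi[axis]) // 2
    hi_left = hi.copy(); hi_left[axis] = mid
    lo_right = lo.copy(); lo_right[axis] = mid
    return box_sum(iv, lo, hi_left) - box_sum(iv, lo_right, hi)

# Example: one feature evaluation on a random test volume.
vol = np.random.rand(64, 64, 64)
iv = integral_volume(vol)
response = haar_3d_feature(iv, (10, 10, 10), (26, 26, 26), axis=2)
```

Because the integral volume is computed once per image, evaluating thousands of such box-difference features per hypothesis remains cheap, which is what makes exhaustive feature pools practical for the translation stage.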
A fast alternative, which comes at the expense of missing global information, is the selection of local image intensity features. In this context, steerable features were proposed [31]. The idea of steerable features is to use a flexible sampling pattern to determine the image points at which local features are computed. For a given hypothesis (x, y, z, φ_x, φ_y, φ_z, s_x, s_y, s_z), the sampling pattern is centered at position (x, y, z), rotated by the corresponding angles (φ_x, φ_y, φ_z), and anisotropically scaled with the factors (s_x, s_y, s_z). Assuming that N local features are computed over a pattern of P sampling points, the complete feature pool contains P × N features. With this strategy, Zheng et al. [31] demonstrated that one can effectively capture both global and local information and, by steering the pattern, also incorporate orientation and scale information.
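As an illustration, the following sketch constructs such a steerable sampling pattern and evaluates a few local features at each point. The grid size, spacing, rotation convention, and the particular local features (intensity and gradient components) are assumptions made for this example; the feature set used in [31] is considerably richer.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # trilinear sampling of the volume

def rotation_matrix(phi_x, phi_y, phi_z):
    # Z-Y-X composition of axis rotations (the convention is an assumption).
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def steerable_features(vol, hypothesis, grid_size=5, spacing=2.0):
    # hypothesis = (x, y, z, phi_x, phi_y, phi_z, s_x, s_y, s_z)
    x, y, z, phi_x, phi_y, phi_z, s_x, s_y, s_z = hypothesis
    # Regular grid of P = grid_size**3 sampling points centered at the origin.
    r = (np.arange(grid_size) - (grid_size - 1) / 2) * spacing
    gx, gy, gz = np.meshgrid(r, r, r, indexing="ij")
    pts = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=0)  # 3 x P
    # Steer the pattern: anisotropic scaling, rotation, then translation.
    pts = rotation_matrix(phi_x, phi_y, phi_z) @ (np.diag([s_x, s_y, s_z]) @ pts)
    pts = pts + np.array([[x], [y], [z]])
    # N local features per sampling point; here intensity plus the three
    # gradient components, giving a feature vector of length P * N.
    channels = [vol] + list(np.gradient(vol))
    feats = [map_coordinates(c, pts, order=1, mode="nearest") for c in channels]
    return np.concatenate(feats)
```

Note that only the sampling pattern is transformed, not the volume itself, which is why evaluating a new orientation or scale hypothesis stays cheap.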

                     3.2.1.3 Sparse adaptive deep neural networks
An alternative to handcrafted features was proposed in [258]. The proposed classifier is based on a modern deep neural network architecture that supports the implicit learning of image features for classification directly from the raw image signal. The application of standard deep neural network architectures is not feasible in the volumetric setting, mainly due to the complexity of the sampling operation under the considered object transformations. To address this challenge, Ghesu et al. [258] propose to enforce sparsity in the network architecture to significantly accelerate the sampling operation. The derived architecture is called sparse adaptive deep neural network, or SADNN (see Fig. 3.1).
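To give a flavor of how such sparsity can be enforced, the sketch below applies a simple magnitude-based pruning schedule to the input-layer weights of a fully connected network: over several rounds, the weakest connections are masked out, so that at test time only a small subset of voxels ever needs to be sampled. The function, the schedule, and the pruning criterion are illustrative assumptions; the actual SADNN training procedure in [258] learns the sparsity jointly with the network weights.

```python
import numpy as np

def learn_sparsity_map(W, T=5, keep_fraction=0.05):
    # W: input-layer weight matrix of shape (n_hidden, n_input_voxels).
    # Returns a boolean sparsity map s and the masked weights.
    # Illustrative magnitude-based pruning; not the training scheme of [258].
    s = np.ones_like(W, dtype=bool)
    # Geometric schedule from 100% of connections down to keep_fraction.
    schedule = np.geomspace(1.0, keep_fraction, T + 1)[1:]
    for frac in schedule:
        magnitudes = np.abs(W) * s
        k = max(1, int(frac * W.size))
        threshold = np.partition(magnitudes.ravel(), -k)[-k]
        s = magnitudes >= threshold   # keep only the strongest connections
        W = W * s
        # ...in a full pipeline the masked network would be retrained here...
    return s, W
```

Input voxels whose column in the resulting map is entirely zero never have to be read from the volume, which is where the speed-up of the sampling operation comes from.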
Given a fully-connected network architecture, the aim is to find a sparsity map s for the network weights w, such that over T train-