Page 204 - Artificial Intelligence for Computational Modeling of the Heart

176  Chapter 5 Machine learning methods for robust parameter estimation

the best CASCADEGEP run 0.1 ± 0.2 ms for QRSd and 11.2 ± 15.8° for EA, respectively. In summary, all three methods yielded comparable performance in terms of EA error; SIMPLEGEP and the proposed method performed similarly in terms of QRSd and were outperformed in this regard by CASCADEGEP. However, considering success rates (the fraction of patients successfully personalized according to the defined convergence criteria), Vito (67%) and CASCADEGEP (68%) performed equivalently, while SIMPLEGEP reached only 49% or less. In terms of run-time, i.e. the average number of forward model runs until convergence, Vito (31.8) almost reached the high efficiency of SIMPLEGEP (best: 20.1 iterations) and clearly outperformed CASCADEGEP (best: 86.6 iterations), meaning Vito was ≈ 2.5× faster.
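The two summary metrics used above can be derived directly from per-patient personalization logs. The sketch below illustrates this with a hypothetical log layout and illustrative convergence thresholds (the exact tolerances are not stated in this excerpt):

```python
import statistics

# Illustrative convergence criteria; the actual thresholds used in the
# experiments are defined elsewhere and may differ.
QRS_TOL_MS = 5.0   # assumed QRSd tolerance
EA_TOL_DEG = 10.0  # assumed EA tolerance

def summarize(logs):
    """Return (success rate, mean forward-model runs over converged cases)."""
    converged = [p for p in logs
                 if p["qrsd_err_ms"] <= QRS_TOL_MS and p["ea_err_deg"] <= EA_TOL_DEG]
    success_rate = len(converged) / len(logs)
    mean_runs = (statistics.mean(p["runs"] for p in converged)
                 if converged else float("nan"))
    return success_rate, mean_runs

# Hypothetical per-patient results: final errors and forward-model runs used.
logs = [
    {"qrsd_err_ms": 0.4,  "ea_err_deg": 6.0,  "runs": 28},
    {"qrsd_err_ms": 1.1,  "ea_err_deg": 9.5,  "runs": 35},
    {"qrsd_err_ms": 12.0, "ea_err_deg": 25.0, "runs": 100},  # failed case
]
rate, runs = summarize(logs)
# rate = 2/3; runs = mean(28, 35) = 31.5
```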
Residual error after initialization One major advantage over standard methods is the data-driven initialization step (see section 5.3.2.1), which eliminates the need for the user to provide initial parameter values. To evaluate the utility of this step, we evaluated the forward model with the computed initialization without further personalization, quantified the resulting errors, and compared them against the errors obtained when fixed initial values were used (based on the initialization of the best performing BOBYQA run). This was done for an increasing number of transition samples per dataset: n_samples = 10^0 … 10^5. Fig. 5.7 shows that both errors decreased notably with increasing training data. In fact, only 100 transitions per dataset sufficed for the data-driven initialization to become more accurate than the best tested fixed initial values.
   In summary, the proposed initialization not only simplifies the setup by removing the need for user-provided initialization; this experiment also showed that it can reduce initial errors by a large margin with only a few training transitions available. It should be noted again that in its normal operating mode (personalization continues after initialization), the model fit is improved further, as demonstrated in the previous experiment.
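One simple way such a data-driven initialization could work is a nearest-neighbor lookup over the stored transition samples, returning the parameters of the stored state whose simulated measurements best match the patient's. This is a hedged sketch of that idea, not the book's exact mechanism (which is detailed in section 5.3.2.1); the parameter name and feature scaling below are hypothetical:

```python
import math

# Hypothetical transition store: simulated measurements (QRSd in ms, EA in
# degrees) paired with the model parameters that produced them.
transitions = [
    {"qrsd": 80.0,  "ea": 20.0, "params": {"cond_lv": 600.0}},
    {"qrsd": 120.0, "ea": 40.0, "params": {"cond_lv": 400.0}},
    {"qrsd": 160.0, "ea": 60.0, "params": {"cond_lv": 250.0}},
]

def initialize(qrsd_meas, ea_meas):
    """Return parameters of the stored state closest to the measurements."""
    def dist(t):
        # Scale each feature so neither dominates (illustrative weights).
        return math.hypot((t["qrsd"] - qrsd_meas) / 10.0,
                          (t["ea"] - ea_meas) / 5.0)
    return min(transitions, key=dist)["params"]

init = initialize(115.0, 38.0)
# nearest stored state is (120, 40), so init == {"cond_lv": 400.0}
```

With more training transitions the lookup becomes denser, which is consistent with the observed drop in initialization error as n_samples grows.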
Convergence analysis With the next experiment, we investigated how much training data (transition samples) was needed to achieve solid performance of the agent. To this end, we evaluated the proposed method with a varying number of training transition samples per dataset and found increasing performance with increasing training data (Fig. 5.8), suggesting that the learning process was working properly. At n_samples = 10^2 samples per patient, the method already outperformed the best version of SIMPLEGEP (49% success rate). Starting from n_samples ≈ 3000, a plateau at ≈ 66% success rate was reached, which then remained approximately constant and almost on par with the top CASCADEGEP performance (68% success rate). Also the number of model runs until convergence