Table 5.4 Mean error in ECG features using diffusivity parameters regressed from clinical measurements.

Diffusivity                    QRSd [ms]      α [deg]
Regression-based prediction    18.7 ± 16.2    6.5 ± 7.6
5.2.2.3 Evaluation on patient data
The method was then evaluated on the 19 patient datasets. Because ground-truth diffusivity coefficients cannot be measured directly in patients, we evaluated the goodness of fit of the personalization approach, namely how close the computed QRS duration and electrical axis were to the measurements after personalization. The regression model trained in the previous section was used for the personalization. Table 5.4 reports the obtained results. The personalization failed in 3 cases (≈ 16% of the cases), yielding negative diffusivity coefficients. The reason was that the measured electrical axis for these cases was outside the normalization range used for training, potentially due to an atypical position of the heart within the torso. Such a situation could easily be detected, and other approaches could then be used to estimate the diffusivity parameters; a sketch of this check is given after this paragraph. The diffusivity coefficients estimated for the other patients were within expected ranges (c_Myo ∈ [141, 582] mm²/s; c_LV, c_RV ∈ [678, 2769] mm²/s). As reported in Table 5.4, the final error between measured and computed QRS duration was about 18 ms, and the error between measured and computed electrical axis was 6.5°, both clinically accepted as within the noise in the measurement. Finally, Fig. 5.4 shows the overlay of simulated ECG leads on measured traces for one representative patient, showing promising goodness of fit.
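To make the failure handling concrete, the following minimal sketch illustrates how such a regression-based prediction with a plausibility check might look. It assumes a scikit-learn-style multi-output regressor already trained to map (QRSd, electrical axis) to (c_Myo, c_LV, c_RV); the function name, feature layout, and regressor interface are illustrative assumptions, not the implementation used in the study.

```python
import numpy as np

def personalize_diffusivities(regressor, qrs_duration_ms, electrical_axis_deg):
    """Predict diffusivity coefficients (mm^2/s) from measured ECG features.

    Returns (c_myo, c_lv, c_rv), or None when personalization fails.
    """
    features = np.array([[qrs_duration_ms, electrical_axis_deg]])
    c_myo, c_lv, c_rv = regressor.predict(features)[0]

    # Failure mode reported above: an electrical axis outside the
    # normalization range used for training can yield negative
    # diffusivities. Such cases are easy to detect and reject, so that
    # another estimation approach can be used instead.
    if min(c_myo, c_lv, c_rv) <= 0.0:
        return None
    return c_myo, c_lv, c_rv
```

In this sketch, a None return would simply flag the case for one of the alternative estimation approaches mentioned above.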
5.3 Reinforcement learning method for model
parameter estimation
The method described in the previous sections was based on traditional supervised learning. What if there were an algorithm that could learn by itself how to personalize a model? To this end, we reformulate the problem in terms of reinforcement learning (RL) [266]. RL has its roots in control theory and in neuroscience theories of learning. It encompasses a set of approaches that make an artificial agent learn from experience generated by interacting with its environment. Contrary to supervised learning [331],