Page 120 - Advances in Biomechanics and Tissue Regeneration

7. MULTISCALE NUMERICAL SIMULATION OF HEART ELECTROPHYSIOLOGY

high-performance computing platforms promise to deliver performance in the PetaFLOPS range. However, achieving high performance on these platforms relies on strong scalability, which is challenging because of the performance deterioration caused by the increasing communication cost between processors as the number of cores grows. That is, with an increasing number of cores, the load assigned to each processor decreases, but the communication between processors associated with the boundaries of each partitioned subdomain increases. Therefore, once communication costs dominate, no further benefit is obtained from adding processors. An alternative to multicore platforms is emerging in programmable graphics processing units (GPUs), which in recent years have become highly parallel, multithreaded, many-core processors with tremendous computational horsepower [16, 17]. GPUs outperform multicore CPU architectures in terms of memory bandwidth but underperform in double-precision floating-point arithmetic. However, GPUs are built to schedule a large number of threads, thus hiding memory latencies in their many-core architecture.
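The communication-versus-computation tradeoff described above can be illustrated with a back-of-the-envelope sketch. The assumptions here are purely illustrative (a cubic grid split into cubic subdomains with one-cell halo exchanges), not a model of any specific platform:

```python
# Illustrative sketch: why strong scaling degrades in 3D domain decomposition.
# A cube of N^3 cells is split into P cubic subdomains; per-core compute scales
# with subdomain volume, while communication scales with its surface (halo).

def comm_to_compute_ratio(N, P):
    """Surface-to-volume ratio of one cubic subdomain (P assumed a perfect cube)."""
    s = round(P ** (1 / 3))          # subdomains per axis
    n = N / s                        # cells per axis per subdomain
    volume = n ** 3                  # interior updates (compute)
    surface = 6 * n ** 2             # halo cells exchanged (communication)
    return surface / volume          # = 6/n: grows as more cores are added

if __name__ == "__main__":
    # Fixed problem size, increasing core counts: the ratio keeps growing,
    # so communication eventually dominates and speedup saturates.
    for P in (8, 64, 512, 4096):
        print(P, comm_to_compute_ratio(512, P))
```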
   Sanderson et al. [18] proposed a general-purpose graphics processing unit (GP-GPU)-based approach for the solution of advection-reaction-diffusion models. They report a performance increase of up to 27 times for an explicit solver applied to 3D problems. Regarding cardiac electrophysiology, previous studies have reported speedups by a factor of 32 for the monodomain model [19] using an explicit finite difference scheme with a rather simple transmembrane ionic model. In their study, Sato et al. [19] identified the solution of the partial differential equation (PDE) as the bottleneck of the GPU computation. However, their studies used older NVidia GT8800 and GT9800 GX2 cards that only supported single-precision floating-point operations, which greatly limited the computation of the parabolic system.
           Chai et al. [20] successfully solved a 25 million node problem on a multi-GPU architecture using the monodomain model
and a four-state variable model. Bartocci et al. [21] implemented an explicit finite difference solver for cardiac electrophysiology. They evaluated the effect of the ionic model size (number of state variables) on the performance in simulating two-dimensional (2D) tissues and compared single-precision and double-precision implementations, reporting acceleration with respect to real time. For small ionic models and the single-precision implementation, they reported simulations faster than real time on small problems, whereas for highly detailed models with a larger number of state variables, they reported simulation times between 35 and 70 times larger than real time.
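The explicit finite difference monodomain solvers referenced above can be sketched in a few lines. The following is an illustrative forward-Euler step on a 2D tissue with a simple two-variable FitzHugh-Nagumo-style ionic model; the grid size, time step, and parameter values are assumptions for illustration, not taken from the cited studies:

```python
# Illustrative sketch (not the cited authors' code): explicit forward-Euler
# integration of the monodomain equation dV/dt = D*Lap(V) - I_ion on a 2D grid,
# with a simple two-variable FitzHugh-Nagumo-style reaction term.
import numpy as np

def laplacian(V, h):
    """5-point finite-difference Laplacian with no-flux (edge-replicated) boundaries."""
    Vp = np.pad(V, 1, mode="edge")
    return (Vp[:-2, 1:-1] + Vp[2:, 1:-1]
            + Vp[1:-1, :-2] + Vp[1:-1, 2:] - 4.0 * V) / h**2

def step(V, w, dt=0.01, h=0.1, D=0.1, a=0.1, eps=0.01, b=0.5):
    """Advance membrane potential V and recovery variable w by one time step."""
    I_ion = V * (V - a) * (V - 1.0) + w          # cubic excitation + recovery
    V_new = V + dt * (D * laplacian(V, h) - I_ion)
    w_new = w + dt * eps * (V - b * w)
    return V_new, w_new

# Stimulate a corner patch and integrate a short interval.
V = np.zeros((64, 64)); V[:8, :8] = 1.0
w = np.zeros_like(V)
for _ in range(100):
    V, w = step(V, w)
```

Note that dt must respect the diffusive stability limit dt <= h^2/(4D) for this explicit scheme, which is precisely why such solvers take many small steps and map well onto GPU threads (one thread per grid node).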
Rocha et al. [22] implemented an implicit method on the GPU. Spatial discretization of the parabolic equation was performed by means of the finite element method (FEM), keeping full stiffness matrices. Promising acceleration ratios were achieved with 2D bidomain tissue models using an unpreconditioned conjugate gradient (CG) method. However, with unstructured 3D bidomain simulations, the number of iterations required for convergence became prohibitive. In a more recent work, Neic et al. [23] showed that 25 processors were equivalent to a single GPU when computing the bidomain equations. This new capability to solve the governing equations on a relatively small GPU cluster makes it possible to one day introduce simulations using patient-specific computer models into a clinical workflow. More recently, Vigueras et al. [24] ported to the GPU a number of components of a parallel C-implemented cardiac solver. They report accelerations of 164 times for the ODE solver and up to 72 times for the PDE solver. They have also achieved accelerations of up to 44 times for the mechanics residual/Jacobian computation in electromechanical simulations.
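The unpreconditioned CG iteration mentioned above can be sketched in matrix-free form, which is also how GPU implementations typically avoid storing the full matrix. The system used here (a small 1D stiffness-like tridiagonal matrix) is an illustrative stand-in, not the bidomain system of the cited work:

```python
# Illustrative sketch: unpreconditioned conjugate gradient (CG) with a
# matrix-free matvec. The iteration count grows with the conditioning of the
# system, which is why large unstructured 3D problems become prohibitive
# without a preconditioner.

def matvec(x):
    """y = A x for the SPD tridiagonal matrix tridiag(-1, 2, -1)."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        y[i] = 2.0 * x[i]
        if i > 0:
            y[i] -= x[i - 1]
        if i < n - 1:
            y[i] -= x[i + 1]
    return y

def cg(b, tol=1e-10, max_iter=1000):
    """Solve A x = b starting from x = 0; returns (x, iterations used)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # residual r = b - A*0
    p = r[:]                     # search direction
    rs = sum(ri * ri for ri in r)
    for k in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            return x, k + 1
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, max_iter

x, iters = cg([1.0] * 8)
```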
              When dealing with the pathological heart, ventricular tachycardia and fibrillation are known to be two types of
cardiac arrhythmias that usually take place during acute ischemia and frequently lead to sudden death [25]. Although these arrhythmias can arise from different conditions, ischemia is their most important underlying cause.
           During ischemia, the delivery of nutrients to the myocardium diminishes, causing metabolic changes that result in
a progressive deterioration of the electric activity in the injured region [26]. These metabolic changes are mainly hypoxia, an increased extracellular potassium concentration [K+]o (hyperkalemia), a decrease of intracellular adenosine triphosphate (ATP), and acidosis [27]. From an electrophysiological point of view, these metabolic changes produce alterations in the action potential (AP), excitability, conduction velocity (CV), and effective refractory period (ERP), among others, creating a substrate for arrhythmias and fibrillation [26, 27]. In addition, the
           impact of ischemia in the myocardium is characterized by a high degree of heterogeneity both intramurally and trans-
murally. In the tissue affected by acute ischemia, two zones can be distinguished: (i) the central ischemic zone, corresponding to the core of the tissue deprived of blood supply, and (ii) a border zone (BZ) that comprises changes in electrophysiological properties between the healthy and ischemic regions [28, 29]. Proarrhythmic mechanisms of acute
           ischemia have been extensively investigated, although often in animal models rather than in human ventricles. Studies
           by Janse et al. [26, 30] in pig and dog hearts highlight the complexity of the proarrhythmic and spatiotemporally
           dynamic substrate in acute ischemia. Heterogeneity in excitability and repolarization properties across the BZ leads
           to the establishment of reentry around the ischemic region following ectopic excitation [26, 31]. The same studies also
           showed intramural reentry in certain cases (highlighting the potential variability in the mechanisms). However, the
           mechanisms that determine reentry formation and intramural patterns in acute ischemia in the 3D human heart remain
           unclear, due to the low resolution of intramural recordings.
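The hyperkalemic component of this ischemic substrate can be made quantitative through the potassium Nernst potential: raising [K+]o shifts the potassium equilibrium potential upward and thereby depolarizes the resting membrane. The sketch below uses typical textbook concentration values, which are assumptions for illustration and are not taken from this chapter:

```python
# Illustrative sketch: effect of hyperkalemia on the potassium Nernst
# potential E_K = (R*T/F) * ln([K+]o / [K+]i). A less negative E_K
# depolarizes the resting membrane, reducing excitability and slowing
# conduction in the ischemic region. Concentrations are textbook-typical
# assumptions (mM), not values from the chapter.
import math

R = 8.314       # gas constant, J/(mol*K)
T = 310.0       # body temperature, K
F = 96485.0     # Faraday constant, C/mol

def nernst_K(K_o, K_i=140.0):
    """Potassium equilibrium potential in mV for extracellular [K+] K_o (mM)."""
    return 1000.0 * (R * T / F) * math.log(K_o / K_i)

E_normal = nernst_K(5.4)     # normal extracellular potassium
E_ischemic = nernst_K(12.0)  # hyperkalemic value: E_K becomes less negative
```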


