
Estimating Patient-Specific Cardiac Conductivities: A Challenge Between Models and Data

By Alessandro Veneziani and Alessio Gizzi

“Two things fill the mind with ever new and increasing admiration and awe […] the starry heavens above me and the moral law within me.” – I. Kant, Critique of Practical Reason

The precise personalization of therapies is a key factor in 21st-century medicine. Imaging and measuring devices, mathematical and numerical techniques, and new computational architectures and paradigms collectively provide an unprecedented framework for the design of therapies and devices for individual patients. As Kant noted, the combination of perception and conceptualization serves as a strong foundation for personalized medicine.

This process has arguably advanced the most in the field of cardiology, as recent scientific advances have led to an exceptional level of maturity when simulating the complex phenomenology of heart function. The success of intraoperative therapies like pacemaking (placing stimuli to trigger the correct electromechanical synchronization) and ablation (burning portions of cardiac tissue to restore physiological conductivity) critically depends on patient-specific features. And the integration of models and data is crucial to calibrate in silico models and personalize therapies. Doing so requires the solution of inverse problems.

In a forward problem, one solves a model given some data: boundary conditions, initial conditions, and parameters. A well-posedness analysis (of the solution’s existence, uniqueness, and stability) establishes that the prescribed data suffice to determine the solution. But in inverse problems, the roles are reversed — one aims to retrieve the model features that generated the measurements, with only limited knowledge of the solution. Minimizing a functional—which reflects the mismatch between computed solutions and measurements—is a common solution strategy; the forward problem acts as a constraint between control and state variables.

However, constrained optimization can be computationally challenging. Because the solution of these problems generally relies on iterative procedures, the computational cost of repeatedly solving initial/boundary-value differential problems may rapidly become unaffordable. Nevertheless, techniques that replace the full model with an appropriate surrogate can provide the breakthrough that brings the optimization approach from proof of concept to the medical clinic. Here, we present an example of data assimilation for the estimation of patient-specific cardiac conductivities.

Figure 1. Fluorescence optical mapping of cardiac electrical activation. 1a. Canine right ventricle visualization from top and bottom with two optical mapping cameras. 1b. Tissue is loaded with voltage dye and illuminated with green light. 1c. Example of an action potential (transmembrane voltage) wave measured with optical mapping. 1d. Optical mapping voltage signal from a single pixel. Figure courtesy of Flavio Fenton and [2].

An Inverse Conductivity Problem

The bidomain model stems from the homogenization of cardiac tissue and features two coexisting regions: the extra- and intracellular domains. By dropping the microscopic domain separation, one can write the following differential system:

\[\textrm{M}\partial_t\boldsymbol{u}-\nabla \cdot (\boldsymbol{\sigma}\nabla\boldsymbol{u})+\boldsymbol{G}_{ion}(\boldsymbol{u},\boldsymbol{w})=\boldsymbol{I}_e \\ \partial_t\boldsymbol{w}+\boldsymbol{g}(\boldsymbol{u},\boldsymbol{w})=0.\tag1\]

Here, \(\boldsymbol{u}\) is the vector of the extra- and intracellular potentials (which are functions of space and time), \(\boldsymbol{\sigma}\) is the block diagonal matrix of conductivities, \(\boldsymbol{I}_e\) is the external stimulus, and \(\boldsymbol{w}\) is the vector of the so-called gating variables; the nonlinear terms \(\boldsymbol{G}_{ion}(\boldsymbol{u},\boldsymbol{w})\) and \(\boldsymbol{g}(\boldsymbol{u},\boldsymbol{w})\) describe the ionic currents and the gating dynamics, respectively. The addition of initial and boundary conditions completes the system. While previous researchers have successfully tackled both analytical and numerical challenges related to this system [4], we consider some simplified models for inverse problems.

The monodomain model derives from a simplification of \((1)\) under an empirical assumption on the conductivity tensor \(\boldsymbol{\sigma}\), which allows one to formulate a single problem for the transmembrane potential (the difference between the intra- and extracellular potentials). The viscous eikonal equation represents a further simplification. Though these simplified models allow for more rapid computations, they may fail to capture specific features of the actual phenomena. As usual, identifying the optimal tradeoff is not an easy task.
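To fix ideas, the following is a minimal sketch of a monodomain forward solve in one space dimension, with FitzHugh-Nagumo-type kinetics standing in for the ionic model, finite differences in space, and explicit time stepping. The discretization, parameter values, and stimulus protocol are illustrative choices for this sketch, not the configuration used in the studies cited below.

```python
import numpy as np

def monodomain_forward(sigma, nx=200, nt=4000, dx=0.01, dt=0.01,
                       a=0.1, eps=0.01, gamma=0.5):
    """Explicit finite-difference solve of a 1D monodomain model with
    FitzHugh-Nagumo-type kinetics (all parameters are illustrative):
        du/dt = sigma * d2u/dx2 - u (u - a)(u - 1) - w + I_stim
        dw/dt = eps * (u - gamma * w)
    Returns a matrix of snapshots of the transmembrane potential u."""
    u = np.zeros(nx)   # transmembrane potential (nondimensional)
    w = np.zeros(nx)   # recovery/gating variable
    snapshots = []
    for n in range(nt):
        # centered Laplacian with zero-flux (Neumann) boundary conditions
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2.0 * (u[1] - u[0]) / dx**2
        lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
        # external stimulus: brief pulse applied at the left end of the cable
        i_stim = np.zeros(nx)
        if n * dt < 1.0:
            i_stim[:5] = 1.0
        # forward Euler update of potential and gating variable
        u_new = u + dt * (sigma * lap - u * (u - a) * (u - 1.0) - w + i_stim)
        w_new = w + dt * eps * (u - gamma * w)
        u, w = u_new, w_new
        if n % 100 == 0:
            snapshots.append(u.copy())
    return np.array(snapshots)

# propagating action potential for a trial (scalar) conductivity
waves = monodomain_forward(sigma=1.0e-3)
```

Swapping in a more realistic ionic model (e.g., the Fenton-Karma or Mitchell-Schaeffer models of Figure 2) only changes the reaction terms; the structure of the forward solve stays the same.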

The optimization of pacemaker placement or ablation location requires the patient-specific identification of \(\boldsymbol{\sigma}\). The fine-tuning of \(\boldsymbol{\sigma}\) is critical because of the spatial variability of cardiac conductivities (on the order of \(10^2 \ \mu\textrm{m}\)) and their impact on action potential propagation across the tissue. A possible deterministic, variational approach is as follows. Find \(\boldsymbol{\sigma}\) that minimizes the functional

\[\Im(\boldsymbol{u}(\boldsymbol{\sigma}))=\frac{1}{2}\int_{\Omega_{obs}}(\boldsymbol{u}_m-\boldsymbol{u}(\boldsymbol{\sigma}))^2dx+\mathcal{R}(\boldsymbol{u}(\boldsymbol{\sigma})),\tag2\]

where \((1)\)—or its monodomain surrogate—acts as the constraint that relates \(\boldsymbol{\sigma}\) and \(\boldsymbol{u}\). Here, \(\boldsymbol{u}_m\) is the set of available measurements, \(\Omega_{obs}\) is the spatial subregion of the cardiac domain \(\Omega\) where measurements are available, and \(\mathcal{R}\) is a regularizing term that improves the convexity of \(\Im\). We call \((2)\) an inverse conductivity problem (ICP). Many challenges arise with this kind of problem at both the theoretical and practical levels. On the one hand, one can prove the existence of a minimum [7]. On the other hand, the accuracy and efficiency of the numerical solver are crucial requirements for clinical applications.
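In discrete form, \((2)\) becomes a PDE-constrained least-squares problem: every evaluation of the misfit requires a forward solve. As a minimal sketch (not the actual pipeline of [2, 7]), one can wrap the monodomain solver above in an off-the-shelf optimizer; the observation window, noise level, regularization weight, and the choice to penalize \(\boldsymbol{\sigma}\) directly are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# synthetic "measurements": a forward solve with a ground-truth conductivity,
# restricted to an observation window Omega_obs and polluted with noise
sigma_true = 1.0e-3
obs_nodes = slice(50, 150)                      # hypothetical Omega_obs
u_meas = monodomain_forward(sigma_true)[:, obs_nodes]
u_meas += 0.01 * np.random.default_rng(0).standard_normal(u_meas.shape)

def misfit(sigma_vec, alpha=1.0e-4, sigma_ref=2.0e-3):
    """Discrete analogue of (2): mismatch on Omega_obs plus a Tikhonov-type
    penalty (here applied to sigma itself, for simplicity)."""
    u = monodomain_forward(sigma_vec[0])[:, obs_nodes]
    mismatch = 0.5 * np.sum((u - u_meas) ** 2)
    regularization = 0.5 * alpha * (sigma_vec[0] - sigma_ref) ** 2
    return mismatch + regularization

# derivative-free minimization: every function evaluation is a full forward
# solve -- precisely the cost that model reduction is meant to tame
result = minimize(misfit, x0=np.array([2.0e-3]), method="Nelder-Mead")
print("estimated conductivity:", result.x[0])
```

In the actual ICP, \(\boldsymbol{\sigma}\) is a spatially varying tensor field rather than a single scalar, and the minimization typically exploits gradient information; the structure of the loop, however, is the same.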

Figure 2. Electrophysiological model tuning. 2a. Action potential in time, reproduced with the Fenton-Karma (FK) and Mitchell-Schaeffer (MS) ionic models and fine-tuned upon experimental data. 2b. Convergence analysis of excitation wave conduction velocity (CV). 2c. Approximation of myocardial fiber structure within a realistic ventricular canine geometry and simulation of the excitation waves at three selected time frames. Figure adapted from [2].

Accuracy: Validation

Validating an inverse problem solution is difficult for several reasons. First, inverse problems are very sensitive to noise in the data (the measurements) and to numerical errors. Second, the data refer to observable quantities (the action potential in \(\Omega_{obs}\)) rather than the problem’s unknowns (the control variable), meaning that the quantitative assessment is indirect. Despite these challenges, Flavio H. Fenton’s group at the Georgia Institute of Technology provided data from canine right ventricles to perform the validation; the group used fluorescence imaging to retrieve the voltage maps from optical cameras (see Figure 1) [2].

The results demonstrate that our procedure retrieves the conductivity tensors with good accuracy, provided that the ionic model and regularization term are properly calibrated (see Figure 2). Figure 3a considers static, dynamic, and Tikhonov-like regularizations and compares how different ionic models reproduce the experimental data. The inverse problem’s performance strongly depends on the modeling of tissue anisotropy and on the electrical pacing frequency; higher frequencies correlate with lower accuracy (see Figure 3b) due to the occurrence of higher-order rhythms in both time and space [6]. However, our approach identifies a consistent spatial pattern of cardiac conductivities for advanced computational studies (see Figure 3c).

Efficiency: Model Reduction

Model reduction (MR) is the art and discipline of finding rigorous surrogate models and solutions for an expensive mathematical model. A common denominator of many MR strategies is the offline/online paradigm: one performs the expensive steps independently of the specific data, stores the results in a library, and then uses that library to efficiently solve the problem of interest. We summarize the rationale by writing the numerical solution of a differential problem as a linear combination of basis functions \(\varphi_i\):

\[u=\sum\limits^N_{i=1}U_i\varphi_i.\tag3\]

The coefficients \(U_i\) must be determined.

When solving the full model (FM) with general-purpose solvers like finite elements, one selects the basis functions a priori. The coefficients then capture all of the problem’s information, which provides versatility but requires a large number of coefficients \(N\), with a correspondingly high computational cost. In MR, the basis is problem-specific and constructed during the offline phase. Since part of the problem’s information is now encoded in the \(\varphi_i\), the number \(N\) that is required for a reliable solution is generally small enough to enable (nearly) real-time computation. Identifying the right MR strategy for the ICP is both challenging and problem dependent.
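At the algebraic level, the online step of most MR methods is a Galerkin projection of the full discrete operator onto the reduced basis, after which only a small dense system must be solved. A generic sketch follows; the matrix, right-hand side, and the random orthonormal basis are placeholders, since in practice \(\Phi\) comes from POD or PGD, as discussed next.

```python
import numpy as np

def reduced_solve(A, b, Phi):
    """Galerkin projection of the full system A u = b (dimension N) onto the
    columns of the reduced basis Phi (N x n, with n << N)."""
    A_r = Phi.T @ A @ Phi             # n x n reduced operator
    b_r = Phi.T @ b                   # reduced right-hand side
    U_r = np.linalg.solve(A_r, b_r)   # the coefficients U_i of (3)
    return Phi @ U_r                  # reconstructed full-size solution

# toy illustration of the cost split: in a real MR pipeline the projection
# A_r = Phi^T A Phi is precomputed offline; the online solve then involves
# only an n x n system instead of an N x N one
N, n = 2000, 15
rng = np.random.default_rng(1)
A = np.eye(N) + 0.01 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
Phi, _ = np.linalg.qr(rng.standard_normal((N, n)))  # placeholder basis
u_rb = reduced_solve(A, b, Phi)
```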

Figure 3. Inverse problem solution of heterogeneous cardiac conductivities. 3a. Comparison between experimental and modeled action potential spatial maps for different pacing cycle lengths (CLs). Four different numerical settings are listed. 3b. Value of misfit \(J\) for the four numerical settings. 3c. Estimated conductivity fields (longitudinal \(\sigma_l\) and transversal \(\sigma_t\)) for the different CLs in the case of Tikhonov regularization. Figure adapted from [2].

Proper Orthogonal Decomposition

The proper orthogonal decomposition (POD) is a popular MR strategy that obtains the basis functions \(\varphi_i\) from a collection of snapshots, i.e., solutions of the forward problem for different parameters [5]. In the ICP, we compute the action potential for selected conductivity values. Since the snapshots are all solutions to the same problem, they carry information with a certain level of redundancy. The singular value decomposition (SVD) is the right filtering tool. The SVD of the snapshot matrix computes the so-called singular values (SVs), whose magnitude establishes a priority among the associated singular vectors. By retaining only the vectors associated with the largest SVs, one can distill the important part of the snapshot matrix. If the SVs decay rapidly, the POD effectively yields small reduced models. But the singular values do not necessarily decay quickly in the case of the ICP. Only with a careful, empirical snapshot selection could we reduce the computational cost of the ICP in a realistic scenario to around 38.8 percent of the FM solution [8]. This reduction is insufficient for clinical applications.
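A minimal POD construction might look as follows, reusing the monodomain sketch above as the snapshot generator; the sampling of the conductivities and the energy criterion for truncation are illustrative choices, not those of [8].

```python
import numpy as np

# offline phase: snapshots of the forward problem for sampled conductivities
sigmas = np.linspace(0.5e-3, 2.0e-3, 8)
S = np.column_stack([monodomain_forward(s).ravel() for s in sigmas])

# SVD of the snapshot matrix; the decay of the singular values measures how
# compressible the snapshot set is
U, svals, _ = np.linalg.svd(S, full_matrices=False)

# retain the leading modes that capture, say, 99.9 percent of the snapshot
# "energy" -- these columns are the POD basis functions phi_i of (3)
energy = np.cumsum(svals**2) / np.sum(svals**2)
n_modes = int(np.searchsorted(energy, 0.999)) + 1
Phi = U[:, :n_modes]
print("retained POD modes:", n_modes, "of", len(sigmas))
```

The slow decay mentioned above manifests itself precisely here: if many singular values are needed to reach the energy threshold, the reduced model is not much cheaper than the full one.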

Proper Generalized Decomposition

The proper generalized decomposition (PGD) is a snapshot-free MR technique [3]. It is based on the idea that if one knows the solution as an explicit function of the independent variables and all parameters, identification of the parameter values that generated a certain measurement reduces to simple root finding. In the case of the ICP, explicitly computing the function \(\boldsymbol{u}(x,t,\boldsymbol{\sigma})\) reduces the identification of the \(\boldsymbol{\sigma}\) that generated the measurements \(\boldsymbol{u}_m(x_i,t_j)\) to a simple system:

\[\boldsymbol{u}(x_i,t_j,\boldsymbol{\sigma})-\boldsymbol{u}_m(x_i,t_j)=0.\tag4\]

Here, the indices \(i,j\) range over the set of available measurements. We solve this equation in real time with a least-squares approach in the online phase.
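Concretely, once the offline phase has delivered a separated representation \(\boldsymbol{u}(x,t,\boldsymbol{\sigma})\approx\sum_k X_k(x)T_k(t)S_k(\boldsymbol{\sigma})\), the online step \((4)\) is a small nonlinear least-squares problem. The sketch below assumes such precomputed modes are available; all names, and the scalar-conductivity simplification, are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def u_separated(sigma, X_modes, T_modes, S_funcs, ix, jt):
    """Evaluate the separated (PGD-like) representation
        u(x_i, t_j, sigma) = sum_k X_k(x_i) * T_k(t_j) * S_k(sigma)
    at the measurement indices ix (space) and jt (time)."""
    return sum(X[ix] * T[jt] * S(sigma)
               for X, T, S in zip(X_modes, T_modes, S_funcs))

def identify_sigma(u_meas, X_modes, T_modes, S_funcs, ix, jt, sigma0):
    """Online phase of the ICP: least-squares solution of (4).  Evaluating
    the separated representation is cheap, so this runs in (near) real time."""
    def residual(s):
        return u_separated(s[0], X_modes, T_modes, S_funcs, ix, jt) - u_meas
    return least_squares(residual, x0=[sigma0], bounds=(0.0, np.inf)).x[0]
```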

During the offline phase, we compute the action potential as a function of \(\boldsymbol{\sigma}\) by introducing an augmented variational formulation wherein the conductivities are treated as independent variables. To solve a differential problem that features many independent variables, we adopt special separation-of-variables techniques with “alternate-direction” procedures that allow us to solve low-dimensional problems iteratively. Despite a substantial lack of theory, strong evidence indicates that the method converges and can eventually provide a real-time solution of the ICP [1]. Figure 4a depicts a representative example on two-dimensional realistic ventricular geometries and compares the reference solution with the PGD approach for different tolerances. Figures 4b and 4c show the results that we obtained on realistic three-dimensional ventricular geometries. We detect a very low level of error, mainly in regions with complex fiber structures. Finally, Figure 4d reports the computational times of the FM versus the PGD: we go from minutes to a fraction of a second!
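The alternate-direction idea can be illustrated on a toy problem. The sketch below builds a separated representation \(u(x,\sigma)\approx\sum_k X_k(x)S_k(\sigma)\) for the parametric diffusion problem \(-\sigma u''(x)=f(x)\) with homogeneous Dirichlet conditions: each enrichment alternates between a one-dimensional problem in \(x\) (with the parametric factor frozen) and an explicit update in \(\sigma\) (with the spatial factor frozen). This is a deliberately simple stand-in for the augmented formulation of [1], not that formulation itself.

```python
import numpy as np

def pgd_parametric_diffusion(n_x=99, n_sigma=50, n_modes=3, n_iter=10):
    """PGD-style alternating-direction enrichment for the toy problem
    -sigma * u''(x) = f(x), u(0) = u(1) = 0, with sigma treated as an extra
    coordinate sampled on a grid.  Purely illustrative."""
    dx = 1.0 / (n_x + 1)
    x = np.linspace(dx, 1.0 - dx, n_x)
    # finite-difference matrix for -d2/dx2 on the interior nodes
    K = (np.diag(2.0 * np.ones(n_x)) - np.diag(np.ones(n_x - 1), 1)
         - np.diag(np.ones(n_x - 1), -1)) / dx**2
    f = np.sin(np.pi * x)                    # illustrative source term
    sigmas = np.linspace(0.5, 2.0, n_sigma)  # sampled parameter range

    X_modes, S_modes = [], []
    for _ in range(n_modes):
        X, S = np.ones(n_x), np.ones(n_sigma)
        for _ in range(n_iter):              # alternating-direction fixed point
            # (i) sigma-direction frozen: one low-dimensional problem in x
            rhs = np.sum(S) * f
            for Xk, Sk in zip(X_modes, S_modes):
                rhs -= np.sum(sigmas * S * Sk) * (K @ Xk)
            X = np.linalg.solve(np.sum(sigmas * S**2) * K, rhs)
            if np.linalg.norm(X) < 1e-12 * np.linalg.norm(f):
                return x, sigmas, X_modes, S_modes   # nothing left to add
            # (ii) x-direction frozen: explicit update per parameter sample
            xKx = X @ K @ X
            num = X @ f - sigmas * sum(Sk * (X @ K @ Xk)
                                       for Xk, Sk in zip(X_modes, S_modes))
            S = num / (sigmas * xKx)
        X_modes.append(X)
        S_modes.append(S)
    return x, sigmas, X_modes, S_modes
```

For this toy problem the exact solution is \(u(x,\sigma)=\sigma^{-1}w(x)\) with \(-w''=f\), so a single mode already reproduces it and the enrichment stops early; in the ICP, time and several conductivities enter the separation, and many more modes (and careful stopping criteria) are needed.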

Figure 4. Benchmark solution of cardiac excitation in realistic ventricular geometries. 4a. Example of finite element (FE) and proper generalized decomposition (PGD) solutions for different tolerances (color code refers to action potential value) in the same canine case of Figures 2 and 3. 4b. Three-dimensional realistic cardiac structure of a left ventricle with organization of myocardial fibers. 4c. Comparison of FE and PGD solutions for different tolerances (color code refers to action potential value) with indication of absolute error. 4d. Comparison of times for the full FE versus PGD reduced models. One online computation takes tenths of a second with PGD versus several minutes for the full model. Figure adapted from [1].

Conclusion

The intrinsic complexity of living systems stems from their multiscale nature in both space and time, and the heart is one of the most fascinating systems in this landscape. Between model-driven and data-driven approaches, an accurate integration of models and data is critical to successfully improve current therapies. Model-driven approaches typically demand high computational costs, yet their physical grounding guarantees high accuracy even with a relatively small amount of accessible data. MR plays a critical role in translating mathematical models from proof of concept to the bedside. This is the second part of a revolution that began in the 20th century — the introduction of imaging in medicine, the predictive nature of models, and the availability of accurate data have all led to an extreme personalization of medical treatments and devices. The mathematical challenges associated with this process will also shape education and define new professional roles that bring advanced mathematics into clinical practice, all while keeping in mind the formidable combination of concepts and data that amazed Kant and still amazes us.


Alessandro Veneziani presented this research during a minisymposium presentation at the 2021 SIAM Conference on Computational Science and Engineering, which took place virtually in March.

Acknowledgments: We thank the U.S. National Science Foundation for grants DMS-1412973 and DMS-1413037, as well as F.H. Fenton, E. Cherry, H. Yang, A. Barone, and the Nonlinear Physics and Mathematical Models Laboratory at Università Campus Bio-Medico in Rome.

References
[1] Barone, A., Carlino, M.G., Gizzi, A., Perotto, S., & Veneziani, A. (2020). Efficient estimation of cardiac conductivities: A proper generalized decomposition approach. J. Comp. Phys., 423, 109810.
[2] Barone, A., Gizzi, A., Fenton, F., Filippi, S., & Veneziani, A. (2020). Experimental validation of a variational data assimilation procedure for estimating space-dependent cardiac conductivities. Comput. Meth. Appl. Mech. Eng., 358, 112615.
[3] Chinesta, F., Keunings, R., & Leygue, A. (2013). The proper generalized decomposition for advanced numerical simulations. Basel, Switzerland: Springer International Publishing. 
[4] Colli Franzone, P., Pavarino, L.F., & Savaré, G. (2006). Computational electrocardiology: Mathematical and numerical modeling. In A. Quarteroni, L. Formaggia, & A. Veneziani (Eds.), Complex systems in biomedicine. Milano, Italy: Springer.
[5] Hesthaven, J.S., Rozza, G., & Stamm, B. (2016). Certified reduced basis methods for parametrized partial differential equations. Basel, Switzerland: Springer International Publishing.
[6] Loppini, A., Gizzi, A., Cherubini, C., Cherry, E.M., Fenton, F.H., & Filippi, S. (2019). Spatiotemporal correlation uncovers characteristic lengths in cardiac tissue. Phys. Rev. E, 100(2).
[7] Yang, H., & Veneziani, A. (2015). Estimation of cardiac conductivities in ventricular tissue by a variational approach. Inverse Prob., 31(11).
[8] Yang, H., & Veneziani, A. (2017). Efficient estimation of cardiac conductivities via POD-DEIM model order reduction. Appl. Numer. Math., 115, 180-199.

Alessandro Veneziani is a professor of mathematics and computer science at Emory University. He has been working in the field of computational mechanics with applications to blood flow and cardiac electrophysiology for almost 30 years, bringing state-of-the-art numerical methods to clinical practice. He received the SIAM Outstanding Paper Prize in 2004 and the International Giovanni Sacchi Landriani Prize for numerical analysis of partial differential equations in 2007. Alessio Gizzi is a tenure-track assistant professor of solid and structural mechanics in the Department of Engineering at the University of Rome’s Campus Bio-Medico. He is an expert in cardiac dynamics modeling and has contributed to the identification of new complex rhythms during cardiac excitation. 
