
Computational Surgery

By Thierry Colin, Marc Garbey, and Olivier Saut

Computational surgery is a new discipline that focuses on the use of medical imaging, medical robotics, simulation, and information technology in surgery [3]. The fifth International Conference in Computational Surgery and Dual Training took place on the main campus of the National Institutes of Health, in Bethesda, Maryland, January 19–21. A particular emphasis at this year’s conference was the clinical and translational impact of multiscale methods, as promoted by the Interagency Modeling and Analysis Group.

The conference operates under a few simple rules that make it unique: It is hosted by a major medical center so as to stay close to the application field; the slate of plenary speakers is divided equally between computational scientists (broadly defined) and surgeons; and the program includes a panel discussion in which key players from the medical industry, academia, and federal/international agencies discuss ways to foster collaborations that will advance the field. The criterion for acceptance of a paper presentation is simple: Does it describe computational work that is translational, i.e., work that could have an impact on patient care?

More than 90 mathematicians, surgeons, oncologists, anesthesiologists, and engineers of all types attended this year's meeting, and attendance continues to grow. What accounts for the increasing success? It is probably a combination of the true transdisciplinary nature of the conference, the depth of the scientific program, and the involvement of surgeons.

For a long time, statistical techniques were the only mathematical tools used in the clinical setting. Statistics can provide information on populations of patients and can be used to quantify risk; in this sense, it is essential. At the same time, the development of imaging techniques, coupled with increasingly powerful signal and image processing methods, has made large amounts of very precise data available for each individual patient. Whatever the disease, moreover, patients benefit from longitudinal follow-up; data are available at several time points and in several modalities.

A question that arises naturally is how multimodal, longitudinal, personalized data can be used to improve diagnostic accuracy, the prognosis for patients, and the management of procedures in the operating room. As presented on the computational surgery network, we believe that we can tackle these problems by integrating different facets of computational science: image processing, PDE methods, machine learning, and agent-based methods. Working with real clinical data and medical doctors is the very first step of a long and challenging endeavor that will take scientists outside their comfort zones. We illustrate this process with the two examples that follow.

Noninvasive Imaging in Modeling Tumor Growth

Mathematical models of cancer have been extensively developed with the aim of understanding and predicting tumor growth and the effects of treatment. In vivo modeling of tumors is limited by the amount of information available. In the last few years, however, we have seen dramatic increases in the range and quality of information provided by noninvasive imaging methods [1, 2]; as a result, several potentially valuable imaging measurements can now be used to quantify tumor growth and to assess tumor status, along with anatomical and functional details. Using different modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), it is possible to evaluate and define tumor status at different levels: physiological, molecular, and cellular.

In this context, the present project aims to support the decision process of oncologists as they define therapeutic protocols via quantitative methods. The idea is to build mathematically and physically sound phenomenological models that can lead to patient-specific full-scale simulations, starting from data typically collected via medical imaging technologies such as CT, MRI, and PET, or, in the case of leukemia, from quantitative molecular biology studies. Our ambition is to provide medical doctors with patient-specific tumor growth models able to estimate, on the basis of previously collected data and within the limits of phenomenological models, the subsequent evolution of the disease and, possibly, the response to treatment.

The final goal is to provide numerical tools that will help clinicians answer crucial questions:

  • When to start a treatment?
  • When to change a treatment?
  • When to stop a treatment?

We also intend to incorporate real-time model information in the hope of improving the precision and effectiveness of noninvasive or micro-invasive tumor ablation techniques, such as acoustic hyperthermia, electroporation, radiofrequency ablation, and cryoablation.

We specifically focus on the following tumors:

  • Lung and liver metastases of distant tumors
  • Low-grade and high-grade gliomas, meningiomas
  • Renal cell carcinomas

Figure 1. Computation of a metastasis to the lung. The images in the top row are from simulations, those at the bottom from CT scans.

These tumors have been chosen because of existing collaborations between INRIA Bordeaux, the Institut Bergonié, and the university hospital.

Our approach is deterministic and spatial: We solve an inverse problem based on imaging data, coupling PDE-type models with a process of image-based data assimilation. The patients in our test cases have been followed at the Institut Bergonié for lung metastases of thyroid tumors; they have slowly evolving, asymptomatic metastatic disease that has been monitored by CT scans. From two thoracic images obtained at successive times, the volume of the tumor under investigation is extracted by segmentation. To test our method, we chose patients who had not received treatment and for whom at least three successive scans were available; we calibrated the model on the first two scans and compared its predictions with the third and later scans. An example is shown in Figure 1.
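To make the calibrate-then-predict workflow concrete, the sketch below fits a simple exponential growth law to the tumor volumes segmented from the first two scans and extrapolates it to the date of the third. This is a minimal scalar surrogate written for illustration only: the actual method relies on spatial PDE models and image-based data assimilation, and the volumes, dates, and function names here are hypothetical.

```python
# Minimal scalar surrogate for the "calibrate on two scans, validate on a
# third" workflow described above.  A single exponential growth rate stands
# in for the spatial PDE model; all volumes and times are illustrative.
import math

def fit_exponential_growth(t0, v0, t1, v1):
    """Return (V0, rate) such that V(t) = V0 * exp(rate * (t - t0))."""
    rate = math.log(v1 / v0) / (t1 - t0)
    return v0, rate

def predict_volume(v0, rate, t0, t):
    """Extrapolate the calibrated growth law to a later time t."""
    return v0 * math.exp(rate * (t - t0))

# Hypothetical segmented tumor volumes (mm^3) from three successive CT scans,
# with acquisition times in months.
scans = [(0.0, 420.0), (6.0, 610.0), (13.0, 980.0)]

(t0, v0), (t1, v1), (t2, v2) = scans
V0, rate = fit_exponential_growth(t0, v0, t1, v1)   # calibrate on the first two scans
v2_pred = predict_volume(V0, rate, t0, t2)          # predict the third

rel_err = abs(v2_pred - v2) / v2
print(f"predicted volume at t = {t2} months: {v2_pred:.0f} mm^3 "
      f"(measured {v2:.0f} mm^3, relative error {100 * rel_err:.1f}%)")
```

In the full approach, the calibrated quantities are patient-specific model parameters rather than a single growth rate, and the predictions can be compared with later scans spatially, as in Figure 1, rather than through volume alone.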

Interrogating a Multiscale/Multifactorial Model of Breast-Conserving Therapy

Treatment of most women with early-stage breast cancer does not require removal of the entire breast; up to 70% of affected women can be effectively and safely treated by breast-conserving therapy (BCT): surgical removal of the tumor only (lumpectomy), followed by radiation of the remaining breast tissue. Unfortunately, the final contour and cosmesis of the treated breast are suboptimal in approximately 30% of patients.

Figure 2. Multiscale/multifactorial model of breast-conserving therapy.

The ability to accurately predict breast contour after BCT could significantly improve patient decision-making regarding the choice of surgery for breast cancer. Our overall hypothesis is that the complex interplay of mechanical and biological factors (gravity, the constitutive laws and distribution of breast tissue, inflammation induced by radiotherapy, and internal stress generated by the healing process) plays a dominant role in determining the success or failure of lumpectomy in preserving the breast contour and cosmesis.

As shown in our initial patient study, even in the ideal situation of an excellent cosmetic outcome, this problem requires multiscale modeling [4, 5]. We propose a method for deciding which component of the model works best for each phase of healing, and which parameters should be considered dominant and patient-specific. See Figure 2.
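One simple way to approach the question of which parameters are dominant is a one-at-a-time sensitivity screen: perturb each candidate parameter over a plausible range, keep the others at their nominal values, and rank the parameters by how much the predicted outcome moves. The sketch below illustrates the idea with a toy scalar "cosmesis score" standing in for the full multiscale model; the parameter names, ranges, and response function are assumptions made for this illustration, not values from the study.

```python
# Illustrative one-at-a-time sensitivity screen for ranking model parameters
# by their effect on a predicted outcome.  The scalar score below is a toy
# placeholder for the full multiscale BCT model; names and ranges are
# assumptions made for this sketch.
import math

def toy_cosmesis_score(params):
    """Toy placeholder response standing in for the full multiscale model."""
    stiffness = params["tissue_stiffness_kpa"]
    inflammation = params["radiation_inflammation"]
    contraction = params["wound_contraction_rate"]
    return math.exp(-0.02 * stiffness) * (1.0 - 0.5 * inflammation) - 0.3 * contraction

# Nominal values and plausible ranges for the screened parameters (illustrative).
nominal = {"tissue_stiffness_kpa": 10.0,
           "radiation_inflammation": 0.3,
           "wound_contraction_rate": 0.2}
ranges = {"tissue_stiffness_kpa": (5.0, 20.0),
          "radiation_inflammation": (0.1, 0.6),
          "wound_contraction_rate": (0.05, 0.5)}

baseline = toy_cosmesis_score(nominal)
effects = {}
for name, (low, high) in ranges.items():
    # Move one parameter at a time and record the spread of the response.
    scores = [toy_cosmesis_score(dict(nominal, **{name: value}))
              for value in (low, high)]
    effects[name] = max(scores) - min(scores)

for name, effect in sorted(effects.items(), key=lambda kv: -kv[1]):
    print(f"{name:>26s}: effect {effect:.3f} (baseline score {baseline:.3f})")
```

In practice, the same loop would wrap the full multiscale simulation, and the resulting ranking would indicate which parameters deserve patient-specific estimation in each phase of healing.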

We refer interested readers to our book series and journal for many more examples of the promise of computational surgery, which should apply in virtually every field of surgery.

References
[1] T. Colin, F. Cornelis, J. Jouganous, J. Palussière, and O. Saut, Patient-specific simulation of tumor growth, response to the treatment, and relapse of a lung metastasis: A clinical case, J. Comput. Surgery, 2:1 (2015).
[2] F. Cornelis, O. Saut, P. Cumsille, D. Lombardi, A. Iollo, J. Palussière, and T. Colin, In vivo mathematical modeling of tumor growth from imaging data: Soon to come in the future?, Diagnostic and Interventional Imaging, 94:6 (June 2013), 571–574.
[3] M. Garbey, B. Bass, S. Berceli, C. Collet, and P. Cerveri, Computational Surgery and Dual Training: Computing, Robotics and Imaging, Springer-Verlag, New York, 2014.
[4] M. Garbey, R. Salmon, D. Thanoon, and B. Bass, Multiscale modeling and distributed computing to predict cosmesis outcome after a lumpectomy, J. Comput. Phys., 244 (2013), 321–335.
[5] R. Salmon, M. Garbey, L.W. Moore, and B.L. Bass, Interrogating a multifactorial model of breast conserving therapy with clinical data, to appear in PLOS ONE.


Thierry Colin and Olivier Saut are professors in the Institut de Mathématiques de Bordeaux at the Université de Bordeaux and researchers at INRIA. Marc Garbey is a professor in the department of biology and biochemistry at the University of Houston, director of research at the Methodist Institute for Technology Innovation and Education at Houston Methodist Hospital, and a professor at the Laboratoire des Sciences de l’Ingénieur pour l’Environnement of the Université de La Rochelle, France.