SIAM News Blog

Quantum Mechanics Without Wavefunctions

By Matthew G. Reuter and Lin-Wang Wang

Quantum mechanical effects are inherent to molecular or material systems in which electronic structure plays a critical role, and computational simulations of these systems help elucidate the properties of chemicals and materials. For example, quantum mechanics describes both physical structure (perhaps the lengths and angles of chemical bonds) and dynamics (including chemical reactions). It is unsurprising, therefore, that quantum mechanical calculations have a myriad of applications, ranging from understanding enzyme catalysis to discovering novel superconducting materials.

Computationally, these simulations aim to solve the Schrödinger wave equation for the wavefunction, the “state variable” of quantum systems [2]. Even though the Schrödinger equation is a linear, second-order PDE and is reasonably simple to write down, it has proven very difficult to solve for all but the simplest systems. Exact solutions are known for some systems with a single electron (the hydrogen atom, for instance); numerical approaches are required for all other systems. In these cases, the key computational difficulty in solving the Schrödinger equation is the number of degrees of freedom: if the system has \(N\) electrons, the wavefunction is a function of \(3N\) variables, so the cost of representing it directly grows exponentially with \(N\).
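To make that growth concrete, here is a small illustrative sketch (not from the article; the grid resolution is a hypothetical choice) of the storage needed to tabulate an \(N\)-electron wavefunction on a uniform real-space grid:

```python
def wavefunction_grid_points(n_electrons, points_per_dim=10):
    """Grid values needed to tabulate a function of 3*N variables.

    A function of d variables sampled on g points per dimension requires
    g**d values, so an N-electron wavefunction needs g**(3*N) of them.
    """
    return points_per_dim ** (3 * n_electrons)

# Even a coarse 10-point-per-dimension grid becomes astronomically large.
for n in (1, 2, 10):
    print(f"N = {n:2d}: {wavefunction_grid_points(n):.3e} grid values")
```

With only ten points per dimension, two electrons already require a million values, and ten electrons require \(10^{30}\), which is why direct tabulation is hopeless beyond tiny systems.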

Over the years, various approximations have been developed to make calculations of the wavefunction more tractable; most of them balance computational cost with physical accuracy [3]. The costliest of these, which is generally the most accurate, scales as \(\mathcal{O}(N!)\), whereas others scale polynomially (at least cubically). Computational bottlenecks in these methods include solving generalized eigenvalue problems and/or manipulating high-order tensors. Accordingly, simulations of large systems, perhaps an entire protein with thousands of atoms and even more electrons, are considered heroic and remain intractable for most approximations. Efforts to calculate larger systems have continually pushed the bounds of computational science, garnering several Gordon Bell prizes along the way [4].

Density functional theory (DFT) is an alternative to these wavefunction-based methods that reformulates quantum mechanics in terms of the electron density, \(\rho\) [8]. Unlike the wavefunction, \(\rho\) is a function of only three variables (regardless of the number of electrons), and for this reason DFT has become a route to scalable quantum mechanical methods. In traditional Kohn–Sham DFT, \(\rho\) is constructed from single-electron orbitals, which are analogous to the wavefunctions mentioned above. Because each orbital describes only one electron, it has just three degrees of freedom, which greatly simplifies the calculations. Nevertheless, the computational cost increases cubically with \(N\) because of the need to calculate and orthogonalize \(N\) orbitals. This cubic scaling has limited Kohn–Sham DFT calculations to systems with a few thousand atoms, and there has been a lasting push to develop linear-scaling methods using both physical and computational approximations [1].
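As a sketch of where the cubic cost comes from (the sizes here are arbitrary toy values, not a real calculation): orthogonalizing \(N\) orbitals, each sampled on \(m\) grid points, costs \(\mathcal{O}(mN^2)\) operations, and since the grid itself grows with the system size (\(m \propto N\)), the total is \(\mathcal{O}(N^3)\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_orb = 200, 20                     # grid points per orbital, number of orbitals
orbitals = rng.standard_normal((m, n_orb))

# A QR factorization orthonormalizes the orbital set; its cost scales as
# O(m * n_orb**2), which becomes O(N**3) when both m and n_orb grow with N.
q, _ = np.linalg.qr(orbitals)

# The overlap matrix of the orthonormalized orbitals is the identity.
overlap = q.T @ q
print(np.allclose(overlap, np.eye(n_orb)))
```

Orbital-free methods sidestep this step entirely, which is the source of their favorable scaling.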

At CSE 2013, Emily Carter of Princeton University discussed her group's recent work to develop orbital-free DFT (OF–DFT) methods, which eschew the use of orbitals in quantum mechanical calculations and work with the electron density alone. One benefit of orbitals is that they simplify the calculation of the electrons' kinetic energy; however, there is no physical requirement to use them. In principle, the kinetic energy can be written as a functional of \(\rho\), and the main challenge in advancing OF–DFT is finding suitable (that is, physically accurate and computationally feasible) forms for this functional. One example is

\[\begin{equation}
KE[\rho] = 
\int d\boldsymbol{r}_1 \int d\boldsymbol{r}_{2}\rho(\boldsymbol{r}_1)^{5/6}F(\boldsymbol{r}_1,\boldsymbol{r}_2;\rho)\rho(\boldsymbol{r}_2)^{5/6}.
\end{equation}\]

The Carter group, which has been developing these functionals for the last fifteen years [5,9], has exposed two computational hurdles in evaluating them for OF–DFT applications.

First, in the simplest approximation, \(F\) is a function of only \(|\boldsymbol{r}_1 - \boldsymbol{r}_2|\), and the functional can be viewed as a convolution integral with a nonlocal kernel \(F\). Evaluation of the kinetic energy functional can then be carried out with Fourier transforms. Although the fast Fourier transform (FFT) has a computational complexity of \(\mathcal{O}(n \log n)\), its requirement of global all-to-all communication intrinsically limits the parallel scalability of OF–DFT methods. Fine-tuning the FFT's communication patterns is one way to help with this scaling; exploiting the physics of the OF–DFT application is another. Specifically, the kernel \(F(\boldsymbol{r}_1,\boldsymbol{r}_2)\) is short-range, or can at least be made short-range without a significant loss of accuracy. As a result, small-box local FFTs can be used instead of global FFTs.
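In one dimension, and with a made-up short-range Gaussian kernel standing in for \(F\), the convolution structure can be sketched as follows (a toy illustration of the FFT evaluation, not the production algorithm):

```python
import numpy as np

n, L = 256, 10.0
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n

rho = 1.0 + 0.3 * np.cos(2.0 * np.pi * x / L)   # smooth, positive toy density
dist = np.minimum(x, L - x)                      # periodic distance to the origin
kernel = np.exp(-dist**2)                        # hypothetical short-range F

# Nonlocal term (F * rho^{5/6})(r1) = \int F(|r1 - r2|) rho(r2)^{5/6} dr2,
# evaluated as a periodic convolution with one forward/inverse FFT pair.
f = rho ** (5.0 / 6.0)
conv = np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(f)).real * dx

# Contracting once more with rho^{5/6} gives the double integral in the text.
ke = np.sum(f * conv) * dx
print(ke > 0.0)
```

The direct double integral would cost \(\mathcal{O}(n^2)\); the FFT route reduces it to \(\mathcal{O}(n \log n)\), at the price of the global communication pattern discussed above.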

Second, the utility of the FFT is potentially hampered by the form of \(F\). It is not guaranteed (or expected) that \(F(\boldsymbol{r}_1,\boldsymbol{r}_2)\) will be a function simply of \(|\boldsymbol{r}_1 – \boldsymbol{r}_2|\); it might also depend on \(\rho\). For example, the kernel may be \(f (k_f |\boldsymbol{r}_1 – \boldsymbol{r}_2|)\), where \(k_f\) is proportional to \([\rho(\boldsymbol{r}_1)\rho(\boldsymbol{r}_2)]^{1/6}\) or \([\rho(\boldsymbol{r}_1) + \rho(\boldsymbol{r}_2)]^{1/3}\). Due to this dependence on \(\rho\), FFTs cannot be directly applied to evaluate the functional (in general). The Carter group has overcome this problem by employing Taylor expansions of \(F\) with respect to \(\rho\) about the average value of \(\rho\). The FFT can then be used on each term in the expansion; multiple FFTs are required to evaluate the functional. Unfortunately, the Taylor expansions might break down or converge slowly, particularly in systems with large fluctuations in \(\rho\) (such as insulators). Better ways to deal with this problem would be welcome.
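Schematically (this is a sketch of the idea, not the Carter group's exact expansion), writing \(\rho = \bar{\rho} + \delta\rho\) and Taylor-expanding the kernel about the average density yields a sum of terms in which the kernel no longer depends on \(\rho\):

\[\begin{equation}
F(\boldsymbol{r}_1,\boldsymbol{r}_2;\rho) \approx \sum_{m,n} \delta\rho(\boldsymbol{r}_1)^m \, K_{mn}(|\boldsymbol{r}_1 - \boldsymbol{r}_2|) \, \delta\rho(\boldsymbol{r}_2)^n,
\end{equation}\]

where each fixed kernel \(K_{mn}\) collects the Taylor coefficients evaluated at \(\bar{\rho}\). Every retained term is then a convolution of local functions of \(\rho\) with a density-independent kernel, so one FFT pair per term suffices; the accuracy of the truncated sum hinges on \(\delta\rho\) remaining small.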

Figure 1. Structure of an aluminum nanowire being stretched. OF–DFT calculations of large systems help illuminate the mechanical properties (here, sliding planes) of materials. See [7] for more details. Figure used with permission of Emily Carter.

Armed with this OF–DFT approach to quantum mechanical simulations, Carter demonstrated its applicability to numerous physical systems, including metals, insulators, and molecules. Owing to the favorable scaling of OF–DFT methods, some of the simulated systems had millions of atoms [6], showing that some large, “holy grail” systems are coming into reach (at least on supercomputers). For one example (shown in Figure 1), Carter detailed how OF–DFT calculations can probe the mechanical properties of nanomaterials under strain [7]. Extrapolating from the successes of quantum chemistry programs in the last twenty years, computational investigations of these types should help guide experimental efforts to understand and design quantum systems. Simulations of large biomolecules might lead to better pharmaceuticals, and high-throughput computations of materials (as called for in the Materials Genome Initiative) could reduce the parameter space for building better batteries. At the end of the day, the quest for bigger, faster, and more accurate quantum mechanical simulations offers great scientific advances and poses ongoing challenges to the physical, chemical, mathematical, and computational communities.

References
[1] D.R. Bowler and T. Miyazaki, \(\mathcal{O}(N)\) methods in electronic structure calculations, Rep. Prog. Phys., 75 (2012), 036503.
[2] C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics, John Wiley & Sons, New York, 1977.
[3] T. Helgaker, P. Jørgensen, and J. Olsen, Molecular Electronic-Structure Theory, John Wiley & Sons, West Sussex, UK, 2000.
[4] ACM Gordon Bell Prize, http://awards.acm.org/homepage.cfm?srt=all&awd=160, accessed April 1, 2013.
[5] C. Huang and E.A. Carter, Nonlocal orbital-free kinetic energy density functional for semiconductors, Phys. Rev. B, 81 (2010), 045206.
[6] L. Hung and E.A. Carter, Accurate simulations of metals at the mesoscale: Explicit treatment of 1 million atoms with quantum mechanics, Chem. Phys. Lett., 475 (2009), 163–170.
[7] L. Hung and E.A. Carter, Orbital-free DFT simulations of elastic response and tensile yielding of ultrathin [111] Al nanowires, J. Phys. Chem. C, 115 (2011), 6269.
[8] W. Kohn and L.J. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev., 140 (1965), A1133.
[9] Y.A. Wang, N. Govind, and E.A. Carter, Orbital-free kinetic-energy density functionals with a density-dependent kernel, Phys. Rev. B, 60 (1999), 16350.

Matthew G. Reuter is a Eugene P. Wigner Fellow in the Computer Science & Mathematics Division and Center for Nanophase Materials Sciences at Oak Ridge National Laboratory. Lin-Wang Wang is a senior staff scientist in the Materials Sciences Division at Lawrence Berkeley National Laboratory.
