# Geometry, Invariants, and the Search for Elusive Complexity Lower Bounds

*Jan Draisma, an associate professor in the Department of Mathematics and Computer Science at Technische Universiteit Eindhoven, is chair of the SIAM Activity Group on Algebraic Geometry. To give *SIAM News* readers an idea of one current research theme in the field, he dropped in on the semester-long (fall 2014) program in algebraic geometry at the Simons Institute for the Theory of Computing.*

In a Simons Institute Open Lecture [2] that marked the beginning of the programme *Algorithms and Complexity in Algebraic Geometry*, Peter Bürgisser of TU Berlin gave an overview of recent developments in geometric complexity theory. This article is loosely based on Bürgisser’s lecture and on lectures by others in the programme’s boot camp one week earlier.

To set the stage, Bürgisser introduced three families of polynomials:

\[\begin{aligned}
\mathrm{esym}_{k,n} &:= \sum_{1 \leq i_1 < \cdots < i_k \leq n} X_{i_1} \cdots X_{i_k}, \\
\mathrm{det}_n &:= \sum_{\pi \in S_n} \mathrm{sgn}(\pi)\, X_{1\pi(1)} \cdots X_{n\pi(n)}, \\
\text{and}\quad \mathrm{perm}_n &:= \sum_{\pi \in S_n} X_{1\pi(1)} \cdots X_{n\pi(n)},
\end{aligned}\]

known as the \(k\)th elementary symmetric polynomial, the determinant, and the permanent. If \(k\) is roughly \(n/2\), then these polynomials look very similar in that their degrees grow linearly in \(n\), while they have super-exponentially many terms. But how efficiently can these polynomials be evaluated at given values \(x_i\) or \(x_{ij}\) for the variables?

Using Gaussian elimination, we can evaluate \(\mathrm{det}_{n}\) in \(\mathcal{O}(n^3)\) arithmetic operations. This is not optimal––and I return to this issue below––but at least it is polynomial in \(n\). To evaluate \(\mathrm{esym}_{k,n}\), we can first evaluate the polynomial \(p_n(T) := (T + x_1) \cdots (T + x_n)\) at \(n\) values for \(T\), interpolate, and extract the coefficient of \(T^{n-k}\). Again, the complexity is \(\mathcal{O}(n^3)\), and we can do even better by using the discrete Fourier transform.
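As a complementary sketch (a direct recurrence rather than the evaluation–interpolation route just described), the coefficients of \(p_n(T)\)––which are exactly the elementary symmetric polynomials––can be accumulated one factor \((T + x)\) at a time in \(\mathcal{O}(n^2)\) operations; the function name below is illustrative:

```python
def elementary_symmetric(xs):
    """All values esym_{k,n}(xs) for k = 0..n, via the recurrence
    e_k <- e_k + x * e_{k-1} as each factor (T + x) of p_n(T) is
    absorbed: O(n^2) operations, versus C(n, k) terms in the
    defining sum."""
    n = len(xs)
    e = [1] + [0] * n          # e[k] holds esym_k of the prefix seen so far
    for x in xs:
        for k in range(n, 0, -1):   # descending, so each x enters each e_k once
            e[k] += x * e[k - 1]
    return e

print(elementary_symmetric([1, 2, 3]))  # [1, 6, 11, 6]
```

For \(xs = (1, 2, 3)\) this returns \(e_1 = 6\), \(e_2 = 11\), \(e_3 = 6\), matching the defining sums.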

Now how about \(\mathrm{perm}_n\)? We can do better than evaluating the \(n!\) terms individually and adding them up; one way to reduce the complexity to exponential is depicted in Figure 1. But no polynomial-time algorithm is known for evaluating the permanent. Indeed, probably none exists: A theorem of Leslie Valiant states that the sequence \((\mathrm{perm}_n)_n\) is complete in the complexity class VNP [17]. This class can be thought of as an arithmetic analogue of NP, and the common belief that P \(\neq\) NP would imply that not all elements of VNP can be evaluated in polynomial time. Yet how would it be possible to prove lower bounds on the complexity of the sequence \((\mathrm{perm}_n)_n\)?
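One classical way to beat the \(n!\)-term sum (a different exponential-time method from the construction in Figure 1) is Ryser's inclusion–exclusion formula. A minimal sketch, with illustrative function names:

```python
import math
from itertools import permutations

def permanent_ryser(a):
    """Ryser's inclusion-exclusion formula:
    perm(a) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i sum_{j in S} a[i][j],
    i.e. O(2^n * n^2) arithmetic operations instead of n! terms."""
    n = len(a)
    total = 0
    for mask in range(1, 1 << n):            # nonempty column subsets S
        cols = [j for j in range(n) if (mask >> j) & 1]
        rows = math.prod(sum(a[i][j] for j in cols) for i in range(n))
        total += (-1) ** len(cols) * rows
    return (-1) ** n * total

def permanent_naive(a):
    """The defining sum over all n! permutations, for comparison."""
    n = len(a)
    return sum(math.prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

a = [[1, 2], [3, 4]]
print(permanent_ryser(a))  # 1*4 + 2*3 = 10
```

Both functions agree, but Ryser's formula replaces the factorial-size sum with one over the \(2^n\) column subsets.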

**Figure 1.** The matrix on the right is the weighted adjacency matrix of the directed graph on the left, with vertices ∅ and 123 identified, variables along arrows as indicated, and 1s along loops. Its determinant equals \((-1)^{3-1}\,\mathrm{perm}_3\); one of the terms in the expansion is shown in red. This construction, from Bruno Grenet, generalises to show that the determinantal complexity of \(\mathrm{perm}_n\) is at most \(2^n - 1\) [7].

Valiant argued that a major step in this direction would be to prove that if \(\mathrm{perm}_n\) is expressed as \(\mathrm{det}_N(A)\) for some \(N \times N\) matrix \(A\) of affine-linear functions in the \(x_{ij}\), as in Figure 1, then \(N\) must grow super-polynomially in \(n\). In 2004, using geometric properties of the hypersurfaces defined by \(\mathrm{det}_N = 0\) and by \(\mathrm{perm}_n = 0\), Mignon and Ressayre proved the best-known lower bound to date on the *determinantal complexity* of the permanent: \(N \geq n^2/2\) [11].
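Grenet's construction can be checked by direct computation. The sketch below (helper names are mine) builds the \((2^n - 1) \times (2^n - 1)\) matrix from the subset-lattice description in the caption of Figure 1: vertices are subsets of \(\{1, \dots, n\}\), the edge from \(S\) to \(S \cup \{i\}\) carries the variable \(X_{|S|+1,\,i}\), the full set is identified with ∅, and every other vertex carries a weight-1 loop:

```python
from itertools import permutations
import math

def grenet_matrix(n, x):
    """Grenet's (2^n - 1) x (2^n - 1) matrix. Vertices are proper subsets
    of {0,...,n-1} encoded as bitmasks; the full set is identified with
    the empty set, which carries no loop. The edge S -> S + {i} is
    labelled by the value x[|S|][i] of the variable X_{|S|+1, i}."""
    full = (1 << n) - 1
    M = [[0] * full for _ in range(full)]
    for S in range(full):
        if S != 0:
            M[S][S] = 1                      # weight-1 loop
        k = bin(S).count("1")                # |S|
        for i in range(n):
            if not (S >> i) & 1:
                T = S | (1 << i)
                M[S][0 if T == full else T] = x[k][i]
    return M

def det(M):
    """Determinant by the Leibniz expansion (fine for these small sizes)."""
    m = len(M)
    total = 0
    for p in permutations(range(m)):
        sign = 1
        for a in range(m):                   # sign via inversion count
            for b in range(a + 1, m):
                if p[a] > p[b]:
                    sign = -sign
        total += sign * math.prod(M[a][p[a]] for a in range(m))
    return total

def perm(x):
    """Permanent by its defining sum."""
    n = len(x)
    return sum(math.prod(x[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Every cycle cover of the graph consists of one length-n cycle through the
# identified vertex plus loops, so det = (-1)^(n-1) * perm, as in Figure 1.
x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det(grenet_matrix(3, x)) == perm(x))  # True: sign (-1)^(3-1) = +1
```

For \(n = 2\) the matrix is \(3 \times 3\) and the sign flips: the determinant equals \(-\mathrm{perm}_2\).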

The currently most active route toward lower bounds is the geometric complexity theory (GCT) programme [12–14]. At a basic level, this approach involves two key ideas. The first is to think of \(\mathrm{det}_N\) and \(Z^{N-n}\,\mathrm{perm}_n\) (the *padded permanent*, where \(Z\) is a homogenising variable that can be taken equal to \(X_{NN}\)) as points in the same vector space \(V_N\) of homogeneous polynomials of degree \(N\) in \(N^2\) variables; the padded permanent just happens to use only \(n^2 + 1\) of the variables. The group \(\mathrm{GL}_{N^2}\) of linear transformations among the variables acts on this space, and a stronger version of Valiant’s conjecture states that if the orbit \(\mathrm{GL}_{N^2} \cdot Z^{N-n}\mathrm{perm}_n\) of the padded permanent lies in the topological closure of the orbit \(\mathrm{GL}_{N^2} \cdot \mathrm{det}_N\), then \(N\) must be super-polynomial in \(n\). Consequently, if \(N\) is too small, then there must be a *witness polynomial function* \(F : V_N \rightarrow \mathbb{C}\) that vanishes on the orbit \(\mathrm{GL}_{N^2} \cdot \mathrm{det}_N\) but not on the orbit of the padded permanent. Such a witness is a needle in a haystack that would not be found by random search.

The second key idea is to exploit the fact that the group \(\mathrm{GL}_{N^2}\) acts on the space of polynomial functions \(V_N \rightarrow \mathbb{C}\) and preserves the set of witnesses. Under this action, the space of polynomial functions decomposes into a sum of irreducible building blocks, and the search for \(F\) can be narrowed down to some of these building blocks. This leads to exciting questions in the representation theory of \(\mathrm{GL}_{N^2}\) that are currently generating a lot of research activity.

So far, this approach has not led to better lower bounds on the determinantal complexity of the permanent. But the method is universal, and it applies in particular to another notorious question in complexity theory––namely, the complexity of matrix multiplication, which governs the complexity of other linear algebra operations, such as the evaluation of \(\mathrm{det}_n\).

Volker Strassen discovered that \(2 \times 2\) matrices can be multiplied with seven instead of eight scalar multiplications, and that applying this multiplication scheme recursively decreases the complexity of \(n \times n\)-matrix multiplication from \(\mathcal{O}(n^3)\) to \(\mathcal{O}(n^{\log_2 7}) = \mathcal{O}(n^{2.81})\) [15]. Don Coppersmith and Shmuel Winograd improved Strassen’s result to \(\mathcal{O}(n^{2.376})\) [4]. In the last few years, several researchers have followed the Coppersmith–Winograd approach to improve the exponent [5, 6, 18]; the current record of \(\mathcal{O}(n^{2.3728639})\) was obtained by François Le Gall. While many researchers believe that the true complexity should be \(\mathcal{O}(n^{2 + \epsilon})\) for every positive \(\epsilon\), Andris Ambainis and Yuval Filmus recently proved that the Coppersmith–Winograd approach cannot possibly yield an upper bound better than \(\mathcal{O}(n^{2.3078})\) [1].
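Strassen's seven products are easy to verify directly. A minimal sketch for the \(2 \times 2\) base case (one standard arrangement of the seven multiplications):

```python
def strassen_2x2(A, B):
    """Strassen's scheme: seven scalar multiplications (m1..m7) instead
    of the eight used by the definition of the 2x2 matrix product."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Each entry of A*B is a signed sum of the seven products.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]]
```

Applied recursively to block matrices, the eight recursive products of the naive scheme become seven, giving the \(\mathcal{O}(n^{\log_2 7})\) running time.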

But how could *unconditional* lower bounds on the complexity of matrix multiplication be proved? The idea is to see \(n \times n\)-*matrix multiplication* as a point \(M_n\) in a space \(V_n := \mathbb{C}^{n^2} \otimes \mathbb{C}^{n^2} \otimes \mathbb{C}^{n^2}\) of tensors, and––as in the determinant-versus-permanent question––to find a witness polynomial function \(F : V_n \rightarrow \mathbb{C}\) that gives a lower bound for the *border rank* of this tensor. Again, representation theory serves as a guide in the search for such witnesses. To illustrate, Jonathan Hauenstein, Christian Ikenmeyer, and J.M. Landsberg found a degree-20 polynomial \(F\) on the space \(V_2\) that up to scalars is preserved by the group \(\mathrm{GL}_4 \times \mathrm{GL}_4 \times \mathrm{GL}_4\) and vanishes on all tensors of border rank at most 6 but not on \(M_2\) [8]. This gives a new, computational proof that the border rank of \(M_2\) is at least 7; Strassen’s result then implies that the border rank is exactly 7. Straightforward combinatorics shows that the space of degree-20 polynomials on the 64-dimensional space \(V_2\) has dimension \(\binom{63 + 20}{20} = 8{,}179{,}808{,}679{,}272{,}664{,}720\)––it is striking how representation theory helps us to find \(F\) and evaluate it at \(M_2\)!
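The dimension count is a stars-and-bars computation: homogeneous degree-\(d\) polynomials on \(\mathbb{C}^m\) correspond to monomials of degree \(d\) in \(m\) variables, of which there are \(\binom{m - 1 + d}{d}\). A quick check against the number quoted above:

```python
import math

# Number of degree-d monomials in m variables: C(m - 1 + d, d).
m, d = 64, 20    # dim V_2 = 4 * 4 * 4 = 64; the witness F has degree 20
dim = math.comb(m - 1 + d, d)
print(f"{dim:,}")  # 8,179,808,679,272,664,720
```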

Bürgisser and Ikenmeyer found a sequence of explicit witnesses that show the border rank of \(M_n\) to be at least \(3n^2/2 - 2\) [3]. While this is slightly weaker than the longstanding bounds of [10, 16], it is the first non-trivial bound proved along the general lines of the GCT programme. The lower bound record, however, is currently held by Landsberg and Giorgio Ottaviani: In a 2011 paper [9], guided by algebraic geometry tailored to the border rank of \(M_n\), they found witnesses that prove a lower bound of \(2n^2 - n\) on the border rank of \(M_n\).

The geometric and representation-theoretic approach to complexity lower bounds, as Bürgisser concluded in his lecture, is starting to pay off. By attracting many of the world’s experts on geometric complexity theory to its current programme, the Simons Institute for the Theory of Computing has created ideal conditions for further progress.

Applications of algebraic geometry to complexity theory will also be one of the many topics on the programme of the upcoming SIAM Conference on Applied Algebraic Geometry (http://camp.nims.re.kr/activities/eventpages/?id=200&action=overview), August 3–7, 2015, at the National Institute for Mathematical Sciences and the Korea Advanced Institute of Science and Technology in Daejeon, South Korea.

**References**

[1] A. Ambainis and Y. Filmus, *On the Coppersmith–Winograd method*, preprint, 2014; http://www.cs.toronto.edu/~yuvalf/Limitations.pdf.

[2] P. Bürgisser, *Geometry, invariants, and the elusive search for complexity lower bounds*, Simons Institute Open Lecture, 2014; slides at http://simons.berkeley.edu/events/openlectures2014-fall-1.

[3] P. Bürgisser and C. Ikenmeyer, *Explicit lower bounds via geometric complexity theory*, Proc. Forty-Fifth Annual ACM Symposium on Theory of Computing, STOC ’13, 2013, 141–150.

[4] D. Coppersmith and S. Winograd, *Matrix multiplication via arithmetic progressions*, J. Symbolic Comput., 9:3 (1990), 251–280.

[5] A.M. Davie and A.J. Stothers, *Improved bound for complexity of matrix multiplication*, Proc. Roy. Soc. Edinburgh Sect. A, 143:2 (2013), 351–369.

[6] F. Le Gall, *Powers of tensors and fast matrix multiplication*, Proc. Thirty-Ninth International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), 296–303.

[7] B. Grenet, *An upper bound for the permanent versus determinant problem*, preprint, 2012, to appear in Theory Comput.; www.lix.polytechnique.fr/~grenet/publis/Gre11.pdf.

[8] J.D. Hauenstein, C. Ikenmeyer, and J.M. Landsberg, *Equations for lower bounds on border rank,* Exp. Math., 22:4 (2013), 372–383.

[9] J.M. Landsberg and G. Ottaviani, *New lower bounds for the border rank of matrix multiplication*, preprint, 2011; arXiv:1112.6007.

[10] T. Lickteig, *A note on border rank*, Inform. Process. Lett., 18:3 (1984), 173–178.

[11] T. Mignon and N. Ressayre, *A quadratic bound for the determinant and permanent problem*, Int. Math. Res. Not. IMRN, 79 (2004), 4241–4253.

[12] K.D. Mulmuley, *On P vs. NP and geometric complexity theory*, J. ACM, 58:2 (2011), article 5.

[13] K.D. Mulmuley, *The GCT program towards the P vs. NP problem*, Comm. ACM, 55:6 (2012), 98–107.

[14] K.D. Mulmuley and M. Sohoni, *Geometric complexity theory, I: An approach to the P vs. NP and related problems*, SIAM J. Comput., 31:2 (2001), 496–526.

[15] V. Strassen, *Gaussian elimination is not optimal*, Numer. Math., 13 (1969), 354–356.

[16] V. Strassen, *Rank and optimal computation of generic tensors,* Linear Algebra Appl., 52/53 (1983), 645–685.

[17] L.G. Valiant, *Completeness classes in algebra*, Proc. Eleventh Annual ACM Symposium on Theory of Computing, STOC 1979, 249–261.

[18] V. Vassilevska Williams, *Multiplying matrices faster than Coppersmith–Winograd*, Proc. Forty-Fourth Annual ACM Symposium on Theory of Computing, STOC 2012, 887–898.