SIAM News Blog

# Control, Intuition, Existence, and Regularity

Figure 1. A soap film spanned by two concentric rings.
Two concentric rings are dipped in a soapy solution so as to form a surface of revolution (see Figure 1). What is the profile $$x(\cdot)$$ of the resulting soap film? This famous minimal surface problem amounts to identifying the function $$x(t)$$ that minimizes the integral functional

$x(\cdot)\mapsto \int_{a}^{b} x(t) \sqrt{1 + x'(t)^2}\, dt$

under the constraint that $$x$$ have prescribed values at $$a$$ and $$b$$. Leonhard Euler solved this instance of the basic problem in the calculus of variations in 1744, finding that the (physically observed) curve is a catenary. This is a smooth function, and the soap film visibly exists; thus, no issues of regularity or existence would seem to arise at all.
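Euler's solution can be recovered in a few lines: since the Lagrangian $$L(x, x') = x\sqrt{1 + x'^2}$$ has no explicit dependence on $$t$$, the Beltrami identity $$L - x' L_{x'} = c$$ (constant) applies:

```latex
x\sqrt{1 + x'^2} \;-\; \frac{x\,x'^2}{\sqrt{1 + x'^2}}
  \;=\; \frac{x}{\sqrt{1 + x'^2}} \;=\; c
\quad\Longrightarrow\quad
x' = \sqrt{(x/c)^2 - 1}
\quad\Longrightarrow\quad
x(t) = c\cosh\!\left(\frac{t - t_0}{c}\right),
```

with the constants $$c$$ and $$t_0$$ fixed by the prescribed values at $$a$$ and $$b$$ — precisely the catenary.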

But what happens if we gradually increase the distance between the two rings? At some point, our physical intuition tells us, the bubble pops. Why so? Has the solution to the problem simply ceased to exist?

The reality is more complex. Almost a century later, Goldschmidt explained (in 1831) that, in fact, the minimal surface has folded onto the rings and has become the union of two disks – that is, the surface of revolution generated by the “broken curve” shown in Figure 2, one that has two “corners” (points of nondifferentiability). To our knowledge, this heralds the first appearance in analysis of a nonsmooth function (and irreversible dynamics).

Figure 2. Does the soap film disappear?
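To put numbers on the popping, consider equal rings of radius $$R = 1$$ at positions $$\pm h$$. The catenary profile $$x(t) = c\cosh(t/c)$$ fits the rings only when $$c\cosh(h/c) = 1$$ has a solution, which requires $$\min_c c\cosh(h/c) \le 1$$. A minimal stdlib-only sketch (our illustration; the numerical setup is not from the article) locates the critical half-separation:

```python
import math

def critical_half_separation():
    """Largest h for which a catenary x(t) = c*cosh(t/c) can span two
    unit-radius rings placed at t = -h and t = +h.

    Writing s = h/c, the minimum over c of c*cosh(h/c) is attained where
    cosh(s) = s*sinh(s), i.e. where s*tanh(s) = 1; the critical h is
    then s/cosh(s).
    """
    lo, hi = 1.0, 2.0                 # bracket the root of s*tanh(s) = 1
    for _ in range(60):               # plain bisection
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    s = 0.5 * (lo + hi)
    return s / math.cosh(s)

h_crit = critical_half_separation()   # roughly 0.663
```

Beyond this separation no spanning catenary exists at all; in fact, somewhat before it (around $$h \approx 0.53$$ in these units) the Goldschmidt pair of disks already has smaller area than the catenoid, so a smooth competitor can still exist while no longer being the minimizer.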

There is no element of “control” in the problem above; that is, there is no structure whereby the state $$x(\cdot)$$ corresponds to the choice of a control function $$u(\cdot)$$ via certain dynamics, such as that of a standard control system

$x'(t) = f(x(t), u(t)) ~\mathrm{a.e.}, \qquad u(t) \in U ~\mathrm{a.e.} \qquad (1)$

For the shape of soap bubbles, nature does the controlling, so to speak. But the insight of Goldschmidt prefigures two of the central topics in control theory, a descendant of the calculus of variations: existence and regularity.
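Concretely, a system like $$(1)$$ pairs each chosen control signal with the state trajectory it generates. A minimal sketch, with an assumed scalar example $$f(x, u) = -x + u$$ and $$U = [-1, 1]$$ (our illustration, not from the article), integrated by Euler's method:

```python
def simulate(f, x0, u_of_t, t0, t1, steps=1000):
    """Euler integration of x'(t) = f(x(t), u(t)) on [t0, t1]."""
    dt = (t1 - t0) / steps
    t, x = t0, x0
    traj = [(t, x)]
    for _ in range(steps):
        x = x + dt * f(x, u_of_t(t))   # one Euler step of (1)
        t = t + dt
        traj.append((t, x))
    return traj

# assumed example: f(x, u) = -x + u, with control values constrained to U = [-1, 1]
f = lambda x, u: -x + u
u = lambda t: 1.0 if t < 2.0 else -1.0   # a bang-bang control taking values in U
traj = simulate(f, x0=0.0, u_of_t=u, t0=0.0, t1=4.0)
```

Every measurable selection $$u(\cdot)$$ with values in $$U$$ produces its own trajectory $$x(\cdot)$$; optimal control asks which selection is best.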

Let’s examine these issues for an optimal control problem that arises in modeling renewable resources. There are two state variables: $$x$$ (which we may think of as a measure of a fish population) and $$y$$ (available boats, which are subject to depreciation). We directly choose two controls: $$u(t)$$ (boats sent out at time $$t$$ to catch fish, a value between $$0$$ and $$y(t)$$), and $$I(t)$$ (investment at time $$t$$ in new boats). It is assumed that investment can have immediate effect; thus $$I$$ may be an impulse control. The dynamics linking the choice of the control functions $$(u, I)$$ to the resulting states $$(x, y)$$ (that is, the function $$f$$ in $$(1)$$), as well as the net infinite-horizon discounted return that one seeks to maximize, will not be made explicit here; see [1, 3]. Instead, let’s take a look at what turns out to be the answer. It has the form of a feedback synthesis, in which the choice of control values depends on the current state.

The optimal synthesis is indicated in Figure 3, for the case in which the initial values of $$x$$ (fish) and $$y$$ (boats) lie at the point designated by the letter $$a$$ (many fish, few boats). It makes sense to invest in boats (at a certain cost, of course) from such a point: we move to the point $$b$$ via an impulse purchase. (The dotted curve, whose provenance is explained later, determines $$b$$.) Subsequently, between $$b$$ and $$c$$, we use all the boats we have $$(u = y)$$; the number of boats is decreasing through wear and tear (depreciation). Once $$c$$ is reached, there is a change of tactic: we cease to use all available boats $$(u < y)$$ in order to maintain the fish population at a certain level $$x_S$$.

Figure 3. An optimal feedback synthesis.

As experience shows, some economists will grumble at this stage that it must have been wasteful to buy so many boats initially, since some are not being used now. However, the level $$x_S$$, it turns out, corresponds to a short-term equilibrium that is a known (and economically accepted) feature of the solution when investment and depreciation are absent; so there is an economic argument for it. At the point $$d$$, however, where the boat level $$y_S$$ is attained, economic intuition is even more seriously challenged: We change our minds about maintaining the fish level at $$x_S$$ and return to using all available boats $$(u = y)$$, even though this drives $$x$$ below the level $$x_S$$ that we had been respecting. Hmm...

Later, the value of $$x$$ returns to $$x_S$$ (at point $$e$$), boats having sufficiently depreciated to allow this. Subsequently we tolerate a low fleet level, using all boats, allowing $$x$$ to increase beyond $$x_S$$ to a certain value $$x_L$$, at point $$f$$. Then we make an immediate purchase of boats to arrive at the point $$g$$ defined by $$y = y_L$$, following which we employ a constant level of continuous investment in order to remain, happily ever after, at $$g$$. The value $$x_L$$ is revealed to be the long-term optimal stock level, which one attains by a circuitous route.
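The article deliberately keeps the model of [1, 3] implicit. Purely to illustrate the mechanics of one arc, assume logistic fish growth with proportional harvest and exponential fleet depreciation — these particular dynamics and parameter values are our assumption, not the model of the references:

```python
def fish_and_boats(x0, y0, r=0.5, K=1.0, q=0.4, gamma=0.2,
                   dt=0.01, steps=2000):
    """Euler simulation of the 'use every boat' arc (u = y, I = 0),
    under ASSUMED illustrative dynamics:
      x' = r*x*(1 - x/K) - q*u*x   (fish: logistic growth minus harvest)
      y' = -gamma*y                (boats: pure depreciation, no investment)
    """
    x, y = x0, y0
    for _ in range(steps):
        u = y                                     # all available boats fish
        x += dt * (r * x * (1 - x / K) - q * u * x)
        y += dt * (-gamma * y)
    return x, y

x_end, y_end = fish_and_boats(x0=0.9, y0=1.5)
```

With all boats out, the fleet decays exponentially while the stock first falls under heavy harvest and then recovers as effort wanes — the qualitative shape of the arc from $$b$$ to $$c$$.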

To assert the optimality of this scheme without proof, given its various counter-intuitive aspects, simply won’t do. (A strictly numerical solution wouldn’t yield much insight.) How to produce such a proof? If we know a priori that our optimal control problem does have a solution (existence theory), if we have at our disposal rigorously proved necessary conditions that apply to the problem at hand, and if we can analyze them to deduce the above strategy, then that would constitute a satisfactory proof by the deductive method. It is a fact that many optimization problems can be solved this way, which explains the very practical importance of existence theorems, as well as necessary conditions that are fully proved under precise hypotheses.

The ingredients of the deductive approach are lacking here. But informal use of the necessary conditions known as the Pontryagin maximum principle leads to the various dotted lines in Figure 3, which can then be used to construct the solution on a speculative basis. It needs to be confirmed, however. There is a famous inductive method for doing this, that of verification functions (see [5]). Given a proposed solution (found, perhaps, by guesswork or dubious means), it is based upon finding a function that satisfies the Hamilton–Jacobi partial differential equation (or inequality) and that is related to the putative solution in a certain way. Then the very existence of this function verifies that the proposed solution is correct. There is a hitch, however, and it’s a question of regularity. Generally, in control (as in this example), the verification function will need to be nonsmooth. Then the solution concept for the PDE necessarily involves generalized derivatives; such topics form part of the subject often referred to as nonsmooth analysis (see [2,5]).
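In its simplest smooth, finite-horizon form (a special case of the machinery in [5], sketched here), the method works like this: to minimize $$\int_a^b \Lambda(x(t), u(t))\,dt + \ell(x(b))$$ over the trajectories of $$(1)$$, it suffices to find a function $$\varphi(t, x)$$ satisfying

```latex
\varphi_t(t,x) \;+\; \min_{u \in U}\,
\big\{ \langle \nabla_{\!x}\, \varphi(t,x),\, f(x,u) \rangle + \Lambda(x,u) \big\}
\;\ge\; 0,
\qquad
\varphi(b, x) \;\le\; \ell(x).
```

Along any admissible trajectory we then have $$\tfrac{d}{dt}\varphi(t, x(t)) \ge -\Lambda(x(t), u(t))$$, so integrating gives every admissible cost $$\ge \varphi(a, x(a))$$; a candidate whose cost equals this number is therefore optimal.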

We observe that in the optimal strategy found above, the dependence of the optimal control values on the current state (the feedback law) is discontinuous. There’s nothing unusual about that; it has been a feature of optimal control since the beginning of the theory (that is, since the 1950s). Engineers, though, tend to look askance at discontinuous feedbacks, for various reasons. Their intuition tells them that they result from demanding the very best solution to a problem and that by settling for more reasonable (suboptimal) feedbacks, or by approximating, it will always be possible to use continuous feedbacks. Their experience in linear systems theory, that highly successful bedrock of engineering control design, bolsters this thinking. It is a surprising revelation of recent years, however, that this intuition is not fully correct, once one strays from the classic linear setting, as sometimes one must. Let’s be a bit more specific, by looking at what is arguably the most basic issue in control systems: the design of stabilizing feedback.

Suppose that the control system $$(1)$$ is nicely controllable to the origin (in a certain sense). Is it then stabilizable by feedback? Translation: Is there a function $$u(x)$$ taking values in the control set $$U$$ so that the differential equation $$x'(t) = f(x(t), u(x(t)))$$ is stable (its trajectories go to 0)? Note that no optimality of any kind is involved here; we simply require a “reasonable” feedback law that will have the effect of driving the system automatically to zero. Nonetheless, it turns out that, in general, we must have recourse to discontinuous feedback functions $$u(x)$$ in order to achieve stabilization. This is so even for such simple, bilinear, mechanically relevant systems as the classical nonholonomic integrator. It is true that there are potential pitfalls in using discontinuous feedbacks. And their implementation must certainly be carefully studied. (The feedback law may be discontinuous, but no actual physical motion of the system itself will be, of course.) But discontinuous feedbacks can offer some real advantages [4,6].
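The nonholonomic integrator mentioned above has dynamics $$x_1' = u_1$$, $$x_2' = u_2$$, $$x_3' = x_1 u_2 - x_2 u_1$$. Brockett’s necessary condition for continuous stabilization asks that $$(x, u) \mapsto f(x, u)$$ cover a neighborhood of the origin for $$x$$ near $$0$$; a small sketch of why it fails here (forcing the first two components to vanish forces the third to vanish too, so no point $$(0, 0, \varepsilon)$$ is ever attained):

```python
import itertools

def f(x, u):
    """The nonholonomic (Brockett) integrator, an instance of (1)."""
    x1, x2, _x3 = x
    u1, u2 = u
    return (u1, u2, x1 * u2 - x2 * u1)

def third_component_bound(x_radius, delta):
    """If |f1| < delta and |f2| < delta, then u1 and u2 are that small
    (since f1 = u1, f2 = u2), so |f3| <= |x1||u2| + |x2||u1|
    < 2*x_radius*delta.  Hence (0, 0, eps) with |eps| >= 2*x_radius*delta
    lies outside the image of f near the origin: Brockett's condition
    fails, and no continuous stabilizing feedback exists."""
    return 2.0 * x_radius * delta

# spot-check the bound on a crude grid of states and controls
grid = [-0.1, 0.0, 0.1]
for x1, x2, u1, u2 in itertools.product(grid, grid, grid, grid):
    f1, f2, f3 = f((x1, x2, 0.0), (u1, u2))
    if abs(f1) < 0.05 and abs(f2) < 0.05:       # forces u1 = u2 = 0 on this grid
        assert abs(f3) <= third_component_bound(0.1, 0.05)
```

The obstruction is structural, not numerical: it survives any smoothness or smallness assumptions, which is why discontinuous feedback is genuinely needed for such systems.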

Figure 4. A simple circuit with a diode.
So far, we have seen problems where irregularity arises indirectly: the solution has corners, the verification function is nonsmooth, and the optimal synthesis or the stabilizing feedback is discontinuous. Nonsmoothness can also arise directly, as an intrinsic part of the problem from the start. Consider, for example, the simplest possible RC electrical network (see Figure 4), but replace the resistor with a diode – that is, a resistor for which the proportionality constant in Ohm’s law depends on the direction of the current. The function $$f$$ in the differential equation $$(1)$$ describing this circuit is nondifferentiable (it is an exercise to show this). Another (more hidden) example of intrinsic nonsmoothness arises in the well-known engineering problem of minimizing the maximum eigenvalue of a matrix (relative to some of its entries). Again, this function, and others arising in optimal design, will generally be nonsmooth [2].
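Both examples are easy to make concrete. For the circuit, idealize the diode as a resistor whose value depends on the current’s direction (the forward and reverse resistances below are assumed values); the capacitor voltage then obeys $$v' = f(v)$$ with $$f$$ kinked at $$v = 0$$. For the eigenvalue, a $$2 \times 2$$ symmetric matrix already exhibits the kink:

```python
import math

R_F, R_R, C = 1.0, 10.0, 1.0   # assumed forward/reverse resistances and capacitance

def f(v):
    """Right-hand side of C*v' = -v/R(v) for the RC-diode loop: the
    effective resistance depends on the direction of the current."""
    R = R_F if v >= 0.0 else R_R
    return -v / (R * C)

# one-sided difference quotients at v = 0 disagree: f is not differentiable there
h = 1e-8
right_slope = (f(h) - f(0.0)) / h     # equals -1/(R_F*C) = -1.0
left_slope = (f(0.0) - f(-h)) / h     # equals -1/(R_R*C) = -0.1

def lam_max(t):
    """Largest eigenvalue of [[t, 0], [0, -t]]: closed form for a 2x2
    symmetric matrix [[a, b], [b, c]] gives |t|, kinked at t = 0."""
    a, b, c = t, 0.0, -t
    return 0.5 * ((a + c) + math.sqrt((a - c) ** 2 + 4 * b ** 2))
```

In both cases the nondifferentiability is built into the problem data itself, which is exactly what nonsmooth analysis is designed to handle.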

In summary, we have seen why it is crucial to be able to solve certain control problems analytically, that intuition cannot always be counted on, and how existence is a central issue. And we have seen various ways, both indirect and intrinsic, in which nonsmoothness arises and is unavoidable. Is this to be deplored? Not at all, for it is a fact of nature; to quote the Old Testament: “Consider God’s handiwork: who can straighten what He hath made crooked?” [Ecclesiastes 7:13].

Francis Clarke is the recipient of the 2015 W.T. and Idalia Reid Prize. This article is adapted from the Reid Prize lecture he delivered on July 9 in Paris at the SIAM Conference on Control and its Applications.

References
[1] C.W. Clark, F. Clarke, and G.R. Munro, The optimal exploitation of renewable resource stocks, Econometrica, 47 (1979), 25-47.

[2] F. Clarke, Optimization and Nonsmooth Analysis, SIAM, Philadelphia, 1990.

[3] F. Clarke, Methods of Dynamic and Nonsmooth Optimization, SIAM, Philadelphia, 1989.

[4] F. Clarke, Lyapunov functions and discontinuous stabilizing feedback, Annu. Rev. Control, 35 (2011), 13-33.

[5] F. Clarke, Functional Analysis, Calculus of Variations and Optimal Control, Springer-Verlag, London, 2013.

[6] F. Clarke, Y.S. Ledyaev, R.J. Stern, and P.R. Wolenski, Nonsmooth Analysis and Control Theory, Springer-Verlag, New York, 1998.

Francis Clarke is a professor in the Institut Camille Jordan at the Université de Lyon.