SIAM News Blog

Energy and Phase-Based Gait Recovery

By George Council and Shai Revzen

Animals quickly recover the ability to move after mild injury. For example, legged creatures start to limp almost immediately after minor leg injuries. The animal’s previous experience with walking or running presumably facilitates this rapid recovery. Legged robots (or any mobile robots) would clearly benefit from similar abilities. Researchers have conducted related work on robot recovery. For example, Cully et al. used an intelligent trial-and-error method on a walking robot to explore a pre-computed set of acceptable behaviors, and Bongard, Zykov, and Lipson developed a walking robot that recovered from damage by compensating with continuous self-modeling. These methods remodel the robot after damage has occurred, which is both time- and information-intensive.

Robots are generally subject to constraints that shape their movement; e.g., we assume that a wheeled robot cannot slide sideways. We present a method that rapidly recovers behaviors using information about these constraints recorded from successful movement, thus avoiding the need to model the damage.

Damage compensation occurs in two steps. First, we consolidate constraints; this step describes the goal behavior in a lower-dimensional way than the robot’s full trajectory. High-fidelity models tend to be very high-dimensional and require many variables to accurately predict the dynamics; using fewer variables is preferable because less data is needed. We then reconstruct prior measurements of these constraints using control. The resulting trajectory preserves the desired low-dimensional behavior by exploiting the redundancy in the robot’s actuation.

If a robot has redundancy in actuation (in that many control inputs achieve the same outcome), one can utilize control to generate a new trajectory for the damaged system that is equivalent in this consolidated sense. For example, the robot depicted in Figure 1 pulls itself along by placing two feet at fixed points, dragging itself forward, and repeating these motions. Each arm is a four-bar linkage: four rigid bars connected end-to-end by powered swivel joints. It should be intuitively reasonable (and it is also true) that infinitely many choices of limb angles place the feet in the same locations. If one joint is stuck, the redundancy of the other joints can compensate. While the presence of more joints potentially provides redundancy, it also complicates the control problem; more joints means that more control inputs must be defined, and more degrees of freedom means that more things can go wrong.

Figure 1. Our “crawler” robot (pictured from above) over time. The thick red and blue bars outline the body, and the black curve designates the center of mass over time. Each thin line is a limb, with actuators at every joint. The green x’s mark the foot locations, and the dashed lines indicate the position of each joint over time.
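To make the redundancy concrete, the following minimal sketch (ours, not the authors’ code) treats a limb as a planar serial chain with unit-length links and asks a numerical solver to hold the foot at its original location after one joint has frozen; the remaining joints absorb the damage. The link lengths, the stuck joint, and its frozen angle are all hypothetical.

import numpy as np
from scipy.optimize import least_squares

LINK_LENGTHS = np.array([1.0, 1.0, 1.0, 1.0])  # hypothetical unit-length links

def foot_position(joint_angles):
    # Planar forward kinematics: where the foot ends up for given joint angles.
    absolute = np.cumsum(joint_angles)
    return np.array([np.sum(LINK_LENGTHS * np.cos(absolute)),
                     np.sum(LINK_LENGTHS * np.sin(absolute))])

nominal = np.array([0.4, -0.3, 0.5, -0.2])   # a working (pre-damage) pose
target_foot = foot_position(nominal)          # the foot placement we must preserve

stuck_index, stuck_angle = 1, 0.1             # suppose joint 1 freezes at 0.1 rad

def residual(free_angles):
    # Rebuild the full pose with the stuck joint pinned, then compare foot positions.
    full = np.insert(free_angles, stuck_index, stuck_angle)
    foot_error = foot_position(full) - target_foot
    # Lightly prefer staying near the original pose; this also selects one
    # solution out of the continuum that the redundancy provides.
    stay_close = 0.05 * (free_angles - np.delete(nominal, stuck_index))
    return np.concatenate([foot_error, stay_close])

fit = least_squares(residual, np.delete(nominal, stuck_index))
recovered = np.insert(fit.x, stuck_index, stuck_angle)
print(target_foot, foot_position(recovered))  # the two should agree closely

Because the chain has more joints than the foot has coordinates, a continuum of poses reaches the same foot location; the solver simply picks one, and that slack is exactly what the recovery method exploits.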

Specifically, one can consolidate the constraints with a nonlinear projection (a submersion) that aggregates state variables into a smaller set. We project the state onto a virtual low-dimensional dynamical system that can perform our desired action. Thus, any path of the original system that projects to the same low-dimensional curve achieves identical behavior, even though the individual trajectories may differ. If the robot is damaged and the original trajectory is no longer physically possible, we can use the redundancy offered by the many actuators to design a new trajectory that is equivalent under the projection.
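In symbols (our notation, not the article’s): let $M$ be the robot’s state space and let $\pi : M \to B$ be a submersion onto a lower-dimensional space $B$. Two trajectories are then equivalent exactly when their projections agree,

\[
q_1 \sim q_2 \iff \pi\bigl(q_1(t)\bigr) = \pi\bigl(q_2(t)\bigr) \ \text{for all } t,
\]

and recovery amounts to finding an input $u$ for the damaged robot whose trajectory $q_u$ satisfies $\pi\bigl(q_u(t)\bigr) = b(t)$, where $b$ is the projected curve recorded before the damage.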

Because we are free to choose this projection, we must select one that is suitable. We presumably choose it so that we know the desired motion in the projected coordinates, but we must also be able to design an input for the full robot that yields the specified projected curve. In the case of a damaged robot, we assume that we have no model, since the robot is no longer working as designed. We also (whether as observers of an autonomous robot or as engineers attempting to remotely reprogram one) lack the information to predict how the damaged robot will move when subjected to inputs that we have not yet tried.

We are bailed out by our second step: using data. We probably know how to evaluate our nonlinear projection, but we do not know how to design an input that achieves the correct projected curve (and thus the desired behavior). We elected to use a collection of observation functions: in short, real-valued measurements of the robot’s state. There is an open and dense set (loosely, “most”) of such functions that one can choose as constraints. We selected these functions ahead of time and evaluated them on the working robot. We then formulated our control problem to preserve the values of these functions for the damaged robot. Doing so fully defines the desired projected curve and, importantly, can be evaluated on trial trajectories without model information; the functions correspond to instantaneous sensor readings rather than a predictive model.
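As a hedged sketch of how such observation functions might be used (the names, the specific measurements, and the sample data below are all hypothetical, not the authors’ implementation): record their values on the working robot, then score any trial run of the damaged robot against those recordings using sensor readings alone.

import numpy as np

def observations(state):
    # Hypothetical instantaneous, sensor-level measurements:
    # two foot positions and the body heading, packed into one vector.
    return np.concatenate([state["left_foot"], state["right_foot"], [state["heading"]]])

# Values recorded while the robot still worked become the targets.
working_run = [
    {"left_foot": np.array([0.0, 0.1]), "right_foot": np.array([0.5, -0.1]), "heading": 0.00},
    {"left_foot": np.array([0.1, 0.1]), "right_foot": np.array([0.6, -0.1]), "heading": 0.02},
]
targets = [observations(s) for s in working_run]

def constraint_mismatch(trial_run):
    # Needs no model of the damaged robot: it only compares sensor readings
    # from a trial against the pre-damage recordings, sample by sample.
    return sum(float(np.sum((observations(s) - t) ** 2))
               for s, t in zip(trial_run, targets))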

Ultimately, we have defined a collection of observation functions that fully describes a desired motion. However, this collection has fewer elements than the robot has dimensions. By using control to maintain the time evolution of these functions between the undamaged and damaged systems, we preserve the desired motion.

We have reduced a dynamics problem on a high-dimensional system to an algebraic problem on a low-dimensional system, and can naturally formulate a regression problem: find the input for which the constraints are best satisfied. Using hardware-in-the-loop optimization techniques on a six-legged robot, we demonstrate that our technique is able to generate a reasonable forward motion in 36 iterations, a relatively small number compared to model-based simulation studies (see Figure 2). A small number of iterations is essential for a technique to be applicable to real-world systems without extensive model information, as each iteration requires physically moving the robot, which introduces mechanical wear and takes time.

Figure 2. Results from our hardware optimization. The red dots use our constraint strategy to recover the desired motion of “walk forward.” The blue dots represent an optimization conducted directly on the input space, without constraints. The top set indicates performance prior to recovery, and the bottom denotes post-recovery. The red dots shift right, which indicates farther forward walking, while the blue dots effectively do not move, indicating failure to recover. Notably, N=36: comparatively few iterations were needed for recovery. The contours come from a kernel density fit that interpolates the points into a continuous function for comparison.
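The authors ran their optimization directly on the physical hexapod. As a rough sketch of the structure of such a hardware-in-the-loop search (not their optimizer, gait parameterization, or robot; run_trial below is a synthetic stand-in), a derivative-free method can be budgeted to a few dozen trials, with each objective evaluation standing in for one run of the robot:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def run_trial(gait_params):
    # Stand-in for one physical trial: in reality this would command the robot,
    # record its sensors, and return the constraint mismatch for this gait.
    hidden_best = np.array([0.3, -0.7, 1.2])  # unknown to the optimizer
    return float(np.sum((gait_params - hidden_best) ** 2)) + 0.01 * rng.standard_normal()

costs = []
def objective(p):
    c = run_trial(p)
    costs.append(c)
    return c

# Derivative-free search with a budget comparable to the 36 trials reported above.
result = minimize(objective, x0=np.zeros(3), method="Nelder-Mead",
                  options={"maxfev": 36})

print(f"trials used: {len(costs)}, best mismatch: {min(costs):.3f}")

Because the objective only compares measured observations with their recorded targets, no model of the broken robot enters the loop.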

By representing our control problem as one of lifting algebraic constraints from a data-driven, reduced-order model of an unknown dynamical system, we avoid the need to model the broken system. While one might be tempted to apply machine learning techniques directly to the hardware, approaches that attempt to explore the entire input space are likely to fail: the very redundancy that our method exploits confounds them, since each degree of freedom is another dimension that an unconstrained algorithm would have to explore.

We hope that our method will help legged robots gain a serious degree of autonomy for the exploration of challenging environments. We also hope that it demonstrates that many degrees of freedom, historically a vexing feature of robot-motion-planning algorithms due to the increased complexity, are actually a benefit that one can exploit without significant computational or modeling overhead.


The authors presented this research during a minisymposium at the 2019 SIAM Conference on Applications of Dynamical Systems, which took place last month in Snowbird, Utah.

The authors would like to thank their funding sources for support of this project:

  • ARO W911NF-14-1-0573: “Morphologically Modulated Dynamics”
  • ARO W911NF-17-1-0243 (a DURIP): “Dynamics, Robotics, and Kinematics Experiments (DRAKE): measuring fast legged robots as oscillators”
  • Sub-award from University of California, Santa Barbara ARO MURI W911NF-17-1-0306: “From Data-Driven Operator Theoretic Schemes to Prediction, Inference, and Control of Systems”
  • NSF CMMI 1825918: “Collaborative Research: Geometrically-Optimal Gait Optimization”
George Council is a Ph.D. candidate at the Biologically Inspired Robotics and Dynamical Systems (BIRDS) Lab at the University of Michigan. He holds an undergraduate degree in electrical engineering from Montana State University and will occasionally lay claim to that title, but his interests range from abstract math to field ecology. His current focus is on control theory and applied mathematics, with frequent forays into robotic hardware.
Shai Revzen is the principal investigator of the Biologically Inspired Robotics and Dynamical Systems (BIRDS) Lab at the University of Michigan. His research interests focus on the study of bio-inspired robotics and new methods and mechanisms for control, scientific study of animal and human locomotion based on nonlinear dynamical systems, and application to design of legged robotic vehicles and other devices.