## Verifying Stability of Stochastic Systems

July 3, 2011

I just finished presenting my recent paper on stochastic verification at RSS 2011. There is a conference version online, with a journal article to come later. In this post I want to go over the problem statement and my solution.

**Problem Statement**

Abstractly, the goal is to be given some sort of description of a system, and of a goal for that system, and then verify that the system will reach that goal. The difference between our work and a lot (but not all) of the previous work is that we want to work with an explicit noise model for the system. So, for instance, I tell you that the system satisfies

$$dx(t) = f(x(t))\,dt + g(x(t))\,dw(t),$$

where $f(x)$ represents the nominal dynamics of the system, $g(x)$ represents how noise enters the system, and $dw(t)$ is a standard Wiener process (the continuous-time version of Gaussian noise). I would like to, for instance, verify that $x(T) \in \mathcal{G}$ for some goal region $\mathcal{G}$ and some final time $T$. For example, if $x$ is one-dimensional then I could ask that $x(T)^2 \leq \epsilon^2$, which is asking for $x$ to be within a distance of $\epsilon$ of the origin at time $T$. For now, I will focus on time-invariant systems and stability conditions. This means that $f$ and $g$ are not functions of $t$, and the condition we want to verify is that $x(t)^2 \leq \epsilon^2$ for all $t \in [0,T]$. However, it is not too difficult to extend these ideas to the time-varying case, as I will show in the results at the end.

The tool we will use for our task is a *supermartingale*, which allows us to prove bounds on the probability that a system leaves a certain region.

**Supermartingales**

Let us suppose that I have a non-negative function $V$ of my state $x$ such that $\mathbb{E}[\dot V(x(t))] \leq 0$ for all $x$ and $t$. Here we define $\dot V$ as

$$\dot V(x) := \lim_{\Delta t \to 0^{+}} \frac{\mathbb{E}\left[V(x(t+\Delta t)) \mid x(t) = x\right] - V(x)}{\Delta t}.$$

Then, just by integrating, we can see that $\mathbb{E}[V(x(T))] \leq V(x(0))$. By Markov's inequality, the probability that $V(x(T)) \geq \rho$ is at most $\frac{V(x(0))}{\rho}$.

We can actually prove something stronger as follows: note that if we re-define our Markov process to stop evolving as soon as $V(x) \geq \rho$, then this only sets $\dot V$ to zero in certain places (so the condition $\mathbb{E}[\dot V] \leq 0$ still holds). Thus the probability that $V(x(T)) \geq \rho$ for this new process is at most $\frac{V(x(0))}{\rho}$. Since the process stops as soon as $V(x) \geq \rho$, we obtain the stronger result that the probability that $V(x(t)) \geq \rho$ for *any* $t \in [0,T]$ is at most $\frac{V(x(0))}{\rho}$. Finally, we only need the condition $\mathbb{E}[\dot V] \leq 0$ to hold when $V(x) < \rho$. We thus obtain the following:

**Theorem.** Let $V$ be a non-negative function such that $\mathbb{E}[\dot V(x)] \leq 0$ whenever $V(x) < \rho$. Then with probability at least $1 - \frac{V(x(0))}{\rho}$, $V(x(t)) < \rho$ for all $t \in [0,T]$.

We call the condition $\mathbb{E}[\dot V] \leq 0$ the *supermartingale condition*, and a function $V$ that satisfies the supermartingale condition is called a *supermartingale*. If we can construct supermartingales for our system, then we can bound the probability that trajectories of the system leave a given region.

NOTE: for most people, a supermartingale is something that satisfies the condition $\mathbb{E}[V(x(t+s)) \mid x(t)] \leq V(x(t))$ for all $s \geq 0$. However, this condition is often impossible to satisfy for systems we might care about. For instance, just consider exponential decay driven by Gaussian noise:

$$dx(t) = -x(t)\,dt + dw(t).$$

Once the system gets close enough to the origin, the exponential decay ceases to matter much and the system is basically just getting bounced around by the Gaussian noise. In particular, if the system is ever at the origin, it will get perturbed away again, so you cannot hope to find a non-constant function of $x$ that is decreasing in expectation everywhere (~~just consider the global minimum of such a function: in all cases, there is a non-zero probability that the Gaussian noise will cause $V$ to increase, but a zero probability that $V$ will decrease because we are already at the global minimum~~ this argument doesn't actually work, but I am pretty sure that my claim is true at least subject to sufficient technical conditions).
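To see this churning behavior concretely, here is a quick Euler–Maruyama simulation of such a system, taken here to be $dx = -x\,dt + dw(t)$ (the step size and horizon are arbitrary choices of mine). The state never settles at the origin; it keeps fluctuating at a roughly constant scale:

```python
import math
import random

random.seed(0)

# Euler-Maruyama simulation of dx = -x dt + dw, started at the origin.
dt = 0.01
steps = 200_000
x = 0.0
samples = []
for i in range(steps):
    x += -x * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
    if i > steps // 2:              # discard the transient, keep the rest
        samples.append(x)

# The stationary distribution of this process is N(0, 1/2), so the state
# keeps getting bounced around with standard deviation near sqrt(1/2).
std = math.sqrt(sum(s * s for s in samples) / len(samples))
print(f"empirical stationary std: {std:.3f}")
```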

**Applying the Martingale Theorem**

Now that we have this theorem, we need some way to actually use it. First, let us try to get a more explicit version of the supermartingale condition for the systems we are considering, which you will recall are of the form $dx(t) = f(x)\,dt + g(x)\,dw(t)$. Note that

$$x(t+\Delta t) - x(t) \approx f(x)\,\Delta t + g(x)\left(w(t+\Delta t) - w(t)\right).$$

Then

$$V(x(t+\Delta t)) \approx V(x) + \frac{\partial V}{\partial x}\left(x(t+\Delta t) - x(t)\right) + \frac{1}{2}\left(x(t+\Delta t) - x(t)\right)^{T}\frac{\partial^{2} V}{\partial x^{2}}\left(x(t+\Delta t) - x(t)\right) + \cdots.$$

A Wiener process satisfies $\mathbb{E}[w(t+\Delta t) - w(t)] = 0$ and $\mathbb{E}\left[(w(t+\Delta t) - w(t))(w(t+\Delta t) - w(t))^{T}\right] = \Delta t \cdot I$, so only the nominal dynamics ($f$) affect the limit of the first-order term while only the noise ($g$) affects the limit of the second-order term (the third-order and higher terms in $\Delta t$ all go to zero). We thus end up with the formula

$$\mathbb{E}[\dot V(x)] = \frac{\partial V}{\partial x} f(x) + \frac{1}{2}\,\mathrm{Tr}\!\left(g(x)^{T}\,\frac{\partial^{2} V}{\partial x^{2}}\,g(x)\right).$$
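As a sanity check on this generator formula, the sketch below estimates $\mathbb{E}[\dot V]$ for the scalar case $f(x) = -x$, $g(x) = 1$, $V(x) = x^2$ by direct Monte Carlo over one small time step, and compares it to the closed form $\frac{\partial V}{\partial x} f + \frac{1}{2} g^2 \frac{\partial^2 V}{\partial x^2} = 1 - 2x^2$ (the evaluation point, step size, and sample count are arbitrary choices):

```python
import math
import random

random.seed(0)

def generator_estimate(x, dt=1e-3, n=1_000_000):
    """Monte Carlo estimate of (E[V(x(t+dt)) | x(t)=x] - V(x)) / dt
    for dx = -x dt + dw and V(x) = x^2."""
    total = 0.0
    for _ in range(n):
        x_next = x + (-x) * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
        total += x_next ** 2 - x ** 2
    return total / (n * dt)

x0 = 1.0
est = generator_estimate(x0)
exact = 1.0 - 2.0 * x0 ** 2     # dV/dx * f + (1/2) g^2 * d^2V/dx^2
print(f"estimate: {est:.3f}, exact: {exact:.3f}")
```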

It is not that difficult to construct a supermartingale, but most supermartingales that you construct will yield a pretty poor bound. To illustrate this, consider the system $dx = -x\,dt + dw(t)$. This is the example in the image from the previous section. Now consider a quadratic function $V(x) = x^{2}$. The preceding formula tells us that $\mathbb{E}[\dot V(x)] = -2x^{2} + 1 \leq 1$. We thus have $\mathbb{E}[V(x(t))] \leq x(0)^{2} + t$, which means that the probability of leaving the region $\{x : x^{2} \leq \rho\}$ by time $T$ is at most $\frac{x(0)^{2} + T}{\rho}$. This is not particularly impressive: it says that we should expect $x$ to grow roughly as $\sqrt{t}$, which is how quickly $x$ would grow if it were a random walk with no stabilizing component at all.
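The looseness of this bound is easy to see numerically. A quick sketch (the parameters $x(0) = 0$, $T = 10$ are arbitrary choices): simulate many trajectories and compare the empirical $\mathbb{E}[x(T)^2]$ against the supermartingale bound $x(0)^2 + T$:

```python
import math
import random

random.seed(0)

dt, T, trials = 0.01, 10.0, 2000
steps = int(T / dt)
final_sq = []
for _ in range(trials):
    x = 0.0
    for _ in range(steps):
        x += -x * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
    final_sq.append(x * x)

mean_sq = sum(final_sq) / trials
bound = 0.0 ** 2 + T            # E[V(x(T))] <= x(0)^2 + T
print(f"E[x(T)^2] ~ {mean_sq:.2f}, supermartingale bound: {bound:.1f}")
```

The true value hovers near the stationary variance, while the bound grows linearly in $T$.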

One way to deal with this is to use a state-dependent bound $\mathbb{E}[\dot{V}] \leq c - kV$. This has been considered, for instance, by Pham, Tabareau, and Slotine (see Lemma 2 and Theorem 2 of their paper), but I am not sure whether their results still work if the supermartingale condition only holds locally instead of globally; I haven't spent much time on this, so they could well generalize quite easily.

Another way to deal with this is to pick a more quickly-growing candidate supermartingale. For instance, we could pick $V(x) = x^{4}$. Then $\mathbb{E}[\dot V(x)] = -4x^{4} + 6x^{2}$, which has a global maximum of $\frac{9}{4}$ at $x = \frac{\sqrt{3}}{2}$. This bound then says that $x^{4}$ grows at a rate of at most $\frac{9}{4}$, so that $x$ grows roughly as $t^{1/4}$, which is better than before, but still much worse than reality.

We could keep improving on this bound by considering successively faster-growing polynomials. However, automating such a process becomes expensive once the degree of the polynomial gets large. Instead, let's consider a function like $V(x) = e^{x^{2}/2}$. Then $\mathbb{E}[\dot V(x)] = \frac{1}{2}(1 - x^{2})\,e^{x^{2}/2}$, which has a maximum of $0.5$ at $x = 0$. Now our bound says that we should expect $x$ to grow like $\sqrt{2\log t}$, which is a much better growth rate (and roughly the true growth rate, at least in terms of the largest value of $x$ over the time interval $[0,t]$).
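We can confirm the constant numerically; a simple grid search (a sketch, with an arbitrary grid) over $\mathbb{E}[\dot V(x)] = \frac{1}{2}(1-x^2)e^{x^2/2}$:

```python
import math

def EVdot(x):
    # E[V-dot] for dx = -x dt + dw with V(x) = exp(x^2 / 2)
    return 0.5 * (1.0 - x * x) * math.exp(x * x / 2.0)

xs = [i / 1000.0 - 5.0 for i in range(10_001)]   # grid on [-5, 5]
best_x = max(xs, key=EVdot)
print(f"max of E[V-dot] is {EVdot(best_x):.4f} at x = {best_x:.3f}")
```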

This leads us to our overall strategy for finding good supermartingales. We will search across functions of the form $V(x) = e^{x^{T} S x}$, where $S \succeq 0$ is a matrix (the $\succeq 0$ means "positive semidefinite", which roughly means that the graph of the function $x^{T} S x$ looks like a bowl rather than a saddle/hyperbola). This raises two questions: how to upper-bound the global maximum of $\mathbb{E}[\dot V]$ for this family, and how to search efficiently over this family. The former is done by doing some careful work with inequalities, while the latter is done with semidefinite programming. I will explain both below.

**Upper-bounding $\mathbb{E}[\dot V]$**

In general, if $V(x) = e^{x^{T} S x}$, then

$$\mathbb{E}[\dot V(x)] = e^{x^{T} S x}\left(2 x^{T} S f(x) + \mathrm{Tr}\!\left(g(x)^{T} S\, g(x)\right) + 2\, x^{T} S\, g(x)\, g(x)^{T} S\, x\right).$$

We would like to show that such a function is upper-bounded by a constant $c$. To do this, move the exponential term to the right-hand side to get the equivalent condition $2 x^{T} S f(x) + \mathrm{Tr}(g(x)^{T} S g(x)) + 2 x^{T} S g(x) g(x)^{T} S x \leq c\,e^{-x^{T} S x}$. Then we can lower-bound $e^{-x^{T} S x}$ by $1 - x^{T} S x$ and obtain the sufficient condition

$$2 x^{T} S f(x) + \mathrm{Tr}\!\left(g(x)^{T} S g(x)\right) + 2\, x^{T} S\, g(x)\, g(x)^{T} S\, x \leq c\left(1 - x^{T} S x\right).$$
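For the running scalar example ($f(x) = -x$, $g(x) = 1$) with $S = \frac{1}{2}$ and candidate constant $c = \frac{1}{2}$, the sufficient condition reduces to $\frac{1}{2}(1 - x^2) \leq \frac{1}{2}(1 - \frac{1}{2}x^2)$, which holds everywhere. A quick sketch checking this on a grid, along with the bound $e^{-u} \geq 1 - u$ that justifies the relaxation:

```python
import math

# Sufficient-condition check for dx = -x dt + dw with V(x) = exp(x^2 / 2),
# i.e. S = 1/2, and candidate constant c = 1/2.
S, c = 0.5, 0.5

for i in range(-500, 501):
    x = i / 50.0                                    # grid on [-10, 10]
    lhs = 2 * x * S * (-x) + S + 2 * (S * x) ** 2   # polynomial left-hand side
    rhs = c * (1 - S * x * x)
    assert lhs <= rhs + 1e-12
    # the relaxation is justified by exp(-u) >= 1 - u for every real u
    u = S * x * x
    assert math.exp(-u) >= 1 - u

print("sufficient condition and exponential lower bound hold on the grid")
```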

It is still not immediately clear how to check such a condition, but the fact that this new condition only involves polynomials (assuming that $f$ and $g$ are polynomials) seems like it should make computations more tractable. This is indeed the case. While checking whether a polynomial is non-negative is NP-hard, checking whether it is a **sum of squares** of other polynomials can be done in polynomial time. While being a sum of squares is not the same as being non-negative, it is a sufficient condition (since the square of a real number is always non-negative).

The way we check whether a polynomial $p(x)$ is a sum of squares is to formulate it as the semidefinite program

$$\text{find } Q \succeq 0 \text{ such that } p(x) = m(x)^{T} Q\, m(x),$$

where $m(x)$ is a vector of monomials. The condition $p(x) = m(x)^{T} Q\, m(x)$ is a set of affine constraints on the entries of $Q$ (obtained by matching coefficients), so the above program is indeed semidefinite and can be solved efficiently.
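As a tiny concrete instance (a hand-built sketch; in practice $Q$ would be found by a semidefinite solver): for $p(x) = x^4 - 2x^2 + 1$ and monomials $m(x) = (1, x, x^2)$, the rank-one matrix $Q = aa^T$ with $a = (1, 0, -1)^T$ is positive semidefinite by construction and satisfies $p(x) = m(x)^T Q\, m(x)$, certifying that $p(x) = (1 - x^2)^2$ is a sum of squares:

```python
import random

random.seed(0)

# p(x) = x^4 - 2x^2 + 1, monomial vector m(x) = (1, x, x^2)
a = [1.0, 0.0, -1.0]
Q = [[ai * aj for aj in a] for ai in a]   # Q = a a^T, PSD by construction

def p(x):
    return x ** 4 - 2 * x ** 2 + 1

def m_Q_m(x):
    m = [1.0, x, x * x]
    return sum(m[i] * Q[i][j] * m[j] for i in range(3) for j in range(3))

# Check the affine constraints p(x) = m(x)^T Q m(x) at random points.
for _ in range(100):
    x = random.uniform(-3, 3)
    assert abs(p(x) - m_Q_m(x)) < 1e-8
print("p(x) = m(x)^T Q m(x) verified; p is the square (1 - x^2)^2")
```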

**Efficiently searching across all matrices S**

We can extend the sum-of-squares idea from the previous section to search over $S$. Note that if $p_{S}(x)$ is a parameterized polynomial whose coefficients are affine in a set of decision variables $S$, then the condition $p_{S}(x) = m(x)^{T} Q\, m(x)$ is again a set of affine constraints on the entries of $S$ and $Q$. This almost solves our problem for us, but not quite. The issue is the form of the constraint in our case:

$$2 x^{T} S f(x) + \mathrm{Tr}\!\left(g(x)^{T} S g(x)\right) + 2\, x^{T} S\, g(x)\, g(x)^{T} S\, x \leq c\left(1 - x^{T} S x\right).$$

Do you see the problem? There are two places where the decision variables do not appear linearly in the constraints: $c$ and $S$ multiply each other in the term $c\, x^{T} S x$, and $S$ appears quadratically in the term $2\, x^{T} S\, g(x)\, g(x)^{T} S\, x$. While the first non-linearity is not so bad ($c$ is a scalar, so it is relatively cheap to search over it exhaustively), the second non-linearity is more serious. Fortunately, we can resolve the issue with Schur complements. The idea behind Schur complements is that, assuming $M \succ 0$, the condition $r - q^{T} M q \geq 0$ is equivalent to the condition $\begin{bmatrix} r & q^{T} \\ q & M^{-1} \end{bmatrix} \succeq 0$. In our case, this means that our condition is equivalent to the condition that

$$\begin{bmatrix} c\left(1 - x^{T} S x\right) - 2 x^{T} S f(x) - \mathrm{Tr}\!\left(g(x)^{T} S g(x)\right) & x^{T} S\, g(x) \\ g(x)^{T} S\, x & \tfrac{1}{2} I \end{bmatrix} \succeq 0,$$
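Here is a quick numerical illustration of the Schur-complement step (a sketch for the scalar case, where the block matrix is 2×2 and positive semidefiniteness can be checked via the diagonal and determinant): $r - 2q^2 \geq 0$ holds exactly when $\begin{bmatrix} r & q \\ q & 1/2 \end{bmatrix} \succeq 0$:

```python
import random

random.seed(0)

def psd_2x2(a, b, d):
    """Check [[a, b], [b, d]] >= 0 via nonnegative diagonal and determinant."""
    return a >= 0 and d >= 0 and a * d - b * b >= 0

for _ in range(10_000):
    r = random.uniform(-2, 2)
    q = random.uniform(-2, 2)
    direct = (r - 2 * q * q >= 0)          # original quadratic condition
    schur = psd_2x2(r, q, 0.5)             # Schur-complement form
    assert direct == schur

print("r - 2q^2 >= 0  <=>  [[r, q], [q, 1/2]] >= 0 on all sampled points")
```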

where $I$ is the identity matrix. Now we have a condition that is linear in the decision variable $S$ (once $c$ is fixed), but it is no longer a polynomial condition; it is a condition that a matrix polynomial be positive semidefinite. Fortunately, we can reduce this to a purely polynomial condition by creating a set of dummy variables $y$ and asking that

$$\begin{bmatrix} 1 \\ y \end{bmatrix}^{T} \begin{bmatrix} c\left(1 - x^{T} S x\right) - 2 x^{T} S f(x) - \mathrm{Tr}\!\left(g(x)^{T} S g(x)\right) & x^{T} S\, g(x) \\ g(x)^{T} S\, x & \tfrac{1}{2} I \end{bmatrix} \begin{bmatrix} 1 \\ y \end{bmatrix}$$

be a sum of squares as a polynomial in $x$ and $y$.

We can then do a line search over $c$ and solve a semidefinite program at each step to determine a feasible value of $S$. If we care about remaining within a specific region, we can maximize $\rho$ such that $x^{T} S x \leq \rho$ implies that we stay in the region. Since our bound on the probability of leaving the region scales roughly as $e^{-\rho}$, this is a pretty reasonable thing to maximize (we would actually want to directly optimize the full bound $\left(V(x(0)) + cT\right)e^{-\rho}$, but this is a bit more difficult to do).
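For the running scalar example, this whole pipeline collapses to something you can do by grid search rather than by solving SDPs. The sketch below (the region radius $R$, horizon $T$, initial condition, and grids are all arbitrary choices of mine) searches over candidates $V(x) = e^{s x^2}$: for each $s$ it computes the smallest valid constant $c(s)$ as the maximum of $\mathbb{E}[\dot V]$ over the region $|x| \leq R$, then evaluates the resulting bound $\left(V(x(0)) + c(s)T\right)e^{-sR^2}$ on the probability of leaving the region:

```python
import math

# Toy end-to-end search for dx = -x dt + dw: candidate supermartingales
# V(x) = exp(s x^2). For each s, find c(s) = max E[V-dot] over |x| <= R,
# then bound P(leave |x| <= R by time T) <= (V(x0) + c T) / exp(s R^2).
R, T, x0 = 3.0, 10.0, 0.0

def EVdot(s, x):
    # generator applied to V = exp(s x^2) for f = -x, g = 1
    return s * math.exp(s * x * x) * (1.0 + (2.0 * s - 2.0) * x * x)

def prob_bound(s):
    xs = [i * R / 2000.0 for i in range(2001)]
    c = max(EVdot(s, x) for x in xs)
    return (math.exp(s * x0 * x0) + c * T) / math.exp(s * R * R)

# line search over s (the scalar analogue of the matrix S)
grid = [i / 100.0 for i in range(5, 100)]        # s in [0.05, 0.99]
best_s = min(grid, key=prob_bound)
print(f"best s = {best_s:.2f}, bound = {prob_bound(best_s):.4f}")
```

Note that the best $s$ is not the $s = \frac{1}{2}$ we used earlier: trading a larger constant $c$ for a faster-growing $V$ can pay off.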

Oftentimes, for instance if we are verifying stability around a trajectory, we would like $S$ to be time-varying. In this case an exhaustive search over $c$ is no longer feasible. Instead we alternate between searching over $S$ and searching over $c$. In the step where we search over $S$, we maximize $\rho$. In the step where we search over $c$, we maximize the amount by which we could change $c$ and still satisfy the constraints (the easiest way to do this is by first maximizing $c$, then minimizing $c$, then taking the average of the two; the fact that semidefinite constraints are convex implies that this optimizes the margin on $c$ for a fixed $S$).

A final note is that systems are often only stable locally, and so we only want to check the supermartingale constraint in a region where $x^{T} S x \leq \rho$. We can do this by adding a *Lagrange multiplier* to our constraints. For instance, if we want to check that $s(x) \geq 0$ whenever $h(x) \geq 0$, it suffices to find a polynomial $\lambda(x)$ such that $\lambda(x) \geq 0$ for all $x$ and $s(x) - \lambda(x)h(x) \geq 0$ for all $x$. (You should convince yourself that this is true; the easiest proof is just by casework on the sign of $h(x)$.) This again introduces a non-linearity in the constraints, but if we fix $\lambda$ then the constraints are linear in $S$ and $c$, and vice-versa, so we can perform the same alternating maximization as before.
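Here is the multiplier trick on a toy instance (hypothetical example polynomials chosen by hand rather than found by an SDP): to certify $s(x) = 3 - x^2 - x^4 \geq 0$ whenever $h(x) = 1 - x^2 \geq 0$, take $\lambda(x) = 2 + x^2$; then $\lambda(x) \geq 0$ everywhere and $s(x) - \lambda(x)h(x) = 1 \geq 0$ everywhere:

```python
def s(x):   return 3 - x**2 - x**4       # claim: s(x) >= 0 on {h(x) >= 0}
def h(x):   return 1 - x**2              # region: |x| <= 1
def lam(x): return 2 + x**2              # Lagrange multiplier, >= 0 everywhere

# Global certificates, checked on a grid (an SDP would certify them exactly):
for i in range(-1000, 1001):
    x = i / 100.0                        # x in [-10, 10]
    assert lam(x) >= 0
    assert s(x) - lam(x) * h(x) >= -1e-9  # this difference is identically 1

# Consequence: whenever h(x) >= 0, s(x) >= lam(x) * h(x) >= 0.
for i in range(-100, 101):
    x = i / 100.0                        # x in [-1, 1], where h >= 0
    assert s(x) >= 0

print("multiplier certificate verified")
```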

**Results**

Below is the most exciting result: an airplane with a noisy camera trying to avoid obstacles. Using the verification methods above, we can show that, with high probability, the plane's trajectory will not leave the gray region: