An Introduction to Stability

Note: This post is adapted from a lecture I gave to my undergrads. It’s focused on presenting the basics of stability for dynamical systems. The big question this lecture is intended to answer is: what will our system do if we perturb it a small amount? This is not a rigorous treatment, and in some places I have traded being technically correct for being clear. If you want a rigorous treatment of the material, I suggest a textbook.

Linear Systems

When our dynamics are linear, we can always write our state space in the following form

\dot{\vec{X}}(t)=A(t)\vec{X}(t) + B(t)\vec{u}(t)

where \vec{X} is an n-by-1 state vector, A is an n-by-n state matrix, \vec{u} is an m-by-1 vector of controls, and B is an n-by-m input matrix.

For example, take a mass m attached to a wall by a spring with stiffness k_1 and a damper with coefficient c_1, driven by an external force F. Newton’s second law gives
m\ddot{x}=-k_1x-c_1\dot{x}+F
and picking position and velocity as our states,
\vec{X}=\begin{bmatrix} x \\ \dot{x} \end{bmatrix}\quad,\quad \dot{\vec{X}}=\begin{bmatrix} \dot{x} \\ \ddot{x} \end{bmatrix}=\begin{bmatrix} \dot{x} \\ -\frac{k_1 x}{m}-\frac{c_1\dot{x}}{m}+\frac{F}{m} \end{bmatrix}
which we can rearrange into the standard form

\dot{\vec{X}}=\begin{bmatrix} 0 & 1 \\ \frac{-k_1}{m}& \frac{-c_1}{m} \end{bmatrix}\begin{bmatrix} x\\ \dot{x} \end{bmatrix} + \begin{bmatrix} 0\\ \frac{1}{m} \end{bmatrix} \begin{bmatrix} F \end{bmatrix}
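Before doing any stability analysis, it can help to just simulate a system like this. Here’s a minimal sketch with NumPy/SciPy; the constants m = 1, k_1 = 4, c_1 = 0.5 are made-up values of my own, not from the lecture:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up constants (not from the lecture): m = 1 kg, k1 = 4 N/m, c1 = 0.5 N*s/m
m, k1, c1 = 1.0, 4.0, 0.5

A = np.array([[0.0, 1.0],
              [-k1 / m, -c1 / m]])
B = np.array([[0.0],
              [1.0 / m]])

def xdot(t, X, F=0.0):
    # X' = A X + B u, with a constant force input u = F
    return A @ X + (B * F).ravel()

# Perturb the mass to x = 1, release it from rest, and let it ring down (F = 0)
sol = solve_ivp(xdot, (0.0, 20.0), [1.0, 0.0], max_step=0.05)
print(sol.y[:, -1])  # position and velocity have decayed toward zero
```

Because this particular system has both stiffness and damping, the perturbation dies out, which is exactly the behavior the eigenvalue analysis below predicts.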

Linear Stability

For the rest of this lecture, let’s pretend that our system has no controls. This leaves us with the reduced state space
\dot{\vec{X}}(t)=A(t)\vec{X}(t)
We can determine our system’s stability by analyzing the eigenvalues, \lambda_i, of the state matrix, which are the solutions of
\det(A- \lambda I)=0
where I is an n-by-n identity matrix.

Eigenvalue practice

A=\begin{bmatrix} 0 & 1 \\ \frac{-k_1}{m} & \frac{-c_1}{m} \end{bmatrix}
|A-\lambda I|=0
\bigg|\begin{bmatrix} 0 & 1 \\ \frac{-k_1}{m} & \frac{-c_1}{m} \end{bmatrix}-\lambda\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\bigg| \rightarrow \bigg|\begin{bmatrix} -\lambda & 1 \\ \frac{-k_1}{m} & \frac{-c_1}{m}-\lambda \end{bmatrix}\bigg| = 0
\lambda^2+\frac{c_1}{m}\lambda + \frac{k_1}{m}=0
Applying the quadratic formula
x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}
gives
\lambda = \frac{1}{2}\bigg(-\frac{c_1}{m}\pm\sqrt{\frac{c_1^2}{m^2}-4\frac{k_1}{m}} \quad\bigg)

Each eigenvalue also has an associated unit eigenvector \vec{V}, found from
(A-\lambda I)\vec{V}=\vec{0} \quad,\quad |\vec{V}|=1
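We can check this hand derivation numerically. A quick sketch, again with made-up constants m = 1, k_1 = 4, c_1 = 0.5:

```python
import numpy as np

# Made-up constants for the spring-mass-damper: m = 1, k1 = 4, c1 = 0.5
m, k1, c1 = 1.0, 4.0, 0.5

A = np.array([[0.0, 1.0],
              [-k1 / m, -c1 / m]])

# Eigenvalues straight from the state matrix...
lam = np.linalg.eigvals(A)

# ...and from the quadratic formula we derived by hand
disc = np.sqrt(complex((c1 / m) ** 2 - 4.0 * k1 / m))
lam_formula = 0.5 * np.array([-c1 / m + disc, -c1 / m - disc])

print(np.sort_complex(lam))           # both routes give -0.25 ± 1.9843j
print(np.sort_complex(lam_formula))
```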

Understanding eigenvalues

Now, each eigenvalue corresponds to a different behavior, and each eigenvector corresponds to a different mode.

Positive Real Portion

An unstable system whose response grows without bound (technical term: blows up)
\lambda_i = c_1 \Re \pm c_2 \Im \quad,\quad s.t. \quad c_1>0
Here c_1 and c_2 are just real constants; the real and imaginary portions don’t need to share the same value.

Zero Real Portion, Some Imaginary Portion

This is called marginally stable: never gonna stop, never gonna blow up, never gonna give you up
\lambda_i = 0 \Re \pm c_2 \Im

Negative Real Portion, No Imaginary Portion

Stable (and simple)
\lambda_i = c_1 \Re \pm 0 \Im \quad,\quad s.t. \quad c_1<0

Negative Real Portion, Some Imaginary Portion

Stable (and oscillatory)
\lambda_i = c_1 \Re \pm c_2 \Im \quad,\quad s.t. \quad c_1<0


Note: For a real state matrix, eigenvalues with imaginary components always come in conjugate pairs of \pm c_2 \Im

Multiple Eigenvalues

If you have multiple eigenvalues, your system’s behavior is dominated by the eigenvalue with the largest real portion. If that largest real portion is positive, your entire system is unstable. If it is zero, your system is marginally stable. If it is negative, the system is stable.
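In code, this “largest real portion” rule is only a few lines. A sketch (the helper name classify and the tolerance are my own choices, not standard API):

```python
import numpy as np

def classify(A, tol=1e-9):
    """Label the linear system X' = AX by the largest real part of its eigenvalues."""
    max_re = np.linalg.eigvals(A).real.max()
    if max_re > tol:
        return "unstable"
    if max_re < -tol:
        return "stable"
    return "marginally stable"

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # marginally stable (pure oscillation)
print(classify(np.array([[0.0, 1.0], [-1.0, -0.5]])))  # stable (damped)
print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))    # unstable
```

One caveat in the spirit of this post’s “clarity over rigor” warning: a repeated eigenvalue with zero real part can still produce growth (think of a defective state matrix), so the marginal case deserves extra care.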

System of equations practice


Consider two masses: mass 1 is connected to a wall by spring k_1 and damper c_1, and to mass 2 by spring k_2 and damper c_2, while an external force F acts on mass 2. The net force on each mass is
F_1 = -k_1x_1 - c_1\dot{x}_1 + k_2(x_2-x_1) + c_2(\dot{x}_2-\dot{x}_1)
F_2 =-k_2(x_2-x_1) -c_2(\dot{x}_2-\dot{x}_1) + F
Assume m_1=m_2=1, so \ddot{x}_i=F_i, which gives
\dot{\vec{X}}=\begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -k_1-k_2 & k_2 & -c_1-c_2 & c_2\\ k_2 & -k_2 & c_2 & -c_2 \end{bmatrix}\begin{bmatrix} x_1\\x_2\\ \dot{x}_1\\ \dot{x}_2 \end{bmatrix} + \begin{bmatrix} 0\\0\\0\\1 \end{bmatrix}F
Assume that k_1=k_2=c_1=c_2=1
A=\begin{bmatrix} 0&0&1&0\\ 0&0&0&1\\ -2&1 &-2 &1\\ 1&-1 &1 &-1 \end{bmatrix}
\lambda = -1.309 \pm 0.9511 i \quad,\quad -0.1910 \pm 0.5878 i
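A 4-by-4 characteristic polynomial is tedious by hand, so this is a natural spot to lean on NumPy. A quick check of those eigenvalues:

```python
import numpy as np

# Two coupled masses with k1 = k2 = c1 = c2 = m1 = m2 = 1
A = np.array([[ 0.0,  0.0,  1.0,  0.0],
              [ 0.0,  0.0,  0.0,  1.0],
              [-2.0,  1.0, -2.0,  1.0],
              [ 1.0, -1.0,  1.0, -1.0]])

lam = np.linalg.eigvals(A)
print(np.round(lam, 4))       # matches the eigenvalues above
print(lam.real.max() < 0)     # True: every real portion is negative, so the system is stable
```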

Connecting Eigenvalues to a solution of ODE’s

We’ve seen that there’s a connection between the eigenvalues of the state matrix and the stability of the system, but why is that? Let’s start this whole explanation off with a simple infinite series
f(x):=\sum_{k=0}^{\infty}\frac{x^k}{k!} = 1 + x + \frac{x^2}{2}+\frac{x^3}{6}+\hdots
Now let’s take its derivative with respect to x
f'(x)= 1 + x + \frac{x^2}{2}+\frac{x^3}{6}+\hdots
The derivative of f is also f!
\frac{df(x)}{dx}=f(x)
That’s such a nice property that we gave this function the name e^x
e^x := f(x) = f'(x) = \frac{de^x}{dx} = \exp(x)
Now, let’s return to differential equation land. The simplest linear differential equation is the following
\frac{dy(x)}{dx}=y(x)
Does this look familiar? It’s the same relationship our function e^x satisfies, so the solution must be an exponential. More generally, for \frac{dy(x)}{dx}=c_2 y(x) we can say
y(x)=c_1e^{c_2x}
Note: there’s this thing called a uniqueness proof which shows that for a linear ODE, this is the only solution. I won’t go over it here, but just know we can rigorously prove that the exponential is the basic solution to a linear ODE.

Now that we have the solution for a single linear ODE, what about a system of linear ODEs? Instead of a single exponential, the solution is a linear superposition of several of these base solutions
\vec{X}(t) = \sum_{i=1}^{n}c_ie^{\lambda_i t}\vec{V}_i,
where \lambda_i is the i’th eigenvalue and \vec{V}_i is the i’th eigenvector.
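To see this superposition in action, we can build X(t) mode-by-mode and compare it against SciPy’s matrix exponential. A sketch reusing the earlier spring-mass-damper matrix with my made-up constants:

```python
import numpy as np
from scipy.linalg import expm

# Spring-mass-damper state matrix with made-up constants (m = 1, k1 = 4, c1 = 0.5)
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])

lam, V = np.linalg.eig(A)    # eigenvalues lam_i and eigenvectors V_i (columns of V)

X0 = np.array([1.0, 0.0])
c = np.linalg.solve(V, X0)   # choose the constants c_i so that X(0) = X0

t = 2.0
# Superposition of modes: X(t) = sum_i c_i e^{lam_i t} V_i
X_modes = (V * c) @ np.exp(lam * t)

# Reference solution from the matrix exponential
X_ref = expm(A * t) @ X0
print(np.allclose(X_modes.real, X_ref))  # True; the imaginary parts cancel in conjugate pairs
```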

Nonlinear Stability

Exploring stability, like most of dynamics, is easy to do in linear systems. Extending it to nonlinear systems is less so. First, we must define the concept of stationary points.

Stationary Points

Stationary points are defined as states that satisfy the following condition
\dot{\vec{X}}=\vec{0}

Stationary Points Practice


For the simple pendulum we have the following equation of motion, where all constants have been set to 1
\ddot{\theta}=-\sin(\theta)
This gives us the following nonlinear state space
\vec{X}=\begin{bmatrix} \theta\\\dot{\theta} \end{bmatrix} \quad,\quad \dot{\vec{X}}=\begin{bmatrix} \dot{\theta}\\-\sin(\theta) \end{bmatrix}
Setting the derivative of the state vector to \vec{0}, we get the following two equations
\dot{\theta}=0
\sin(\theta)=0 \rightarrow \theta \in \{n\pi\mid n\in \mathbb{Z}\}
Note there are an infinite number of stationary points for the humble pendulum, and we can mark each stationary point as \vec{X}_i.
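We can also recover these stationary points numerically. A sketch using SciPy’s root-finder on the state derivative (the seed guesses are arbitrary):

```python
import numpy as np
from scipy.optimize import fsolve

def Xdot(X):
    theta, theta_dot = X
    return [theta_dot, -np.sin(theta)]

# Seed the root-finder with a few arbitrary guesses; each lands on a multiple of pi
for guess in (0.5, 2.5, 6.0):
    theta_star, theta_dot_star = fsolve(Xdot, [guess, 0.0])
    print(round(theta_star / np.pi, 3), "pi")
```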

Linearization

Let’s expand the derivative of our state vector about these stationary points using a Taylor series, and then truncate the second-order and higher terms

\dot{\vec{X}}(\vec{X}) \approx \dot{\vec{X}}|_{\vec{X}_i} + \nabla\dot{\vec{X}}|_{\vec{X}_i}\big(\vec{X}-\vec{X}_i\big) + \mathcal{O}\big((\vec{X}-\vec{X}_i)^2\big)

where |_{\vec{X}_i} indicates that you evaluate that term at stationary point i. The first term vanishes by the definition of a stationary point, so, at least for the region about that stationary point, the perturbation \vec{X}-\vec{X}_i evolves linearly and we obtain the following relationship

A\approx\nabla\dot{\vec{X}}|_{\vec{X}_i}

We can now use the tools we developed for linear systems to determine the stability of stationary points.

Caveats About Linearization

Note: Because the linearization is only an approximation of the state matrix, you have to be aware of the following

  • Linearization makes the assumption that we can approximate the derivative of the state vector by ignoring higher order terms. This is not always true! Explore the following example on your own
    \dot{x}=xy \quad,\quad \dot{y}=x^2-y
  • This only covers a region close to the stationary point. How close? That depends on how important those higher-order terms we truncated are.
  • While linearization can tell us if a stationary point is stable or unstable, if the largest real portion of the eigenvalues is zero (aka marginally stable), the result is inconclusive. The higher-order terms can tip the system towards or away from stability.

Linearization Practice

Returning to our simple pendulum, let’s practice linearizing the system
\nabla\dot{\vec{X}}=\begin{bmatrix} \frac{\partial \dot\theta}{\partial \theta} & \frac{\partial \dot\theta}{\partial\dot\theta} \\ \frac{\partial \ddot\theta}{\partial \theta} & \frac{\partial \ddot\theta}{\partial\dot\theta} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\-\cos(\theta) & 0 \end{bmatrix}
Now, let’s get the eigenvalues when our pendulum is hanging completely down, \theta=0, and when it is upright, \theta=\pi
\lambda_{\theta=0}=\pm 1i,
which is marginally stable, so we would need a more advanced technique to determine the actual stability; because the undamped pendulum conserves energy, it turns out the downward equilibrium truly is marginally stable, and
\lambda_{\theta=\pi}=\pm 1,
which is clearly unstable, as expected.
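The whole linearization recipe can also be mechanized: finite-difference the state derivative to approximate \nabla\dot{\vec{X}}, then take eigenvalues. A sketch (the jacobian helper is my own, not a library routine):

```python
import numpy as np

def Xdot(X):
    theta, theta_dot = X
    return np.array([theta_dot, -np.sin(theta)])

def jacobian(f, X0, eps=1e-6):
    """Central-difference approximation of the gradient of the state derivative at X0."""
    n = len(X0)
    J = np.zeros((n, n))
    for j in range(n):
        dX = np.zeros(n)
        dX[j] = eps
        J[:, j] = (f(X0 + dX) - f(X0 - dX)) / (2.0 * eps)
    return J

for theta in (0.0, np.pi):
    A = jacobian(Xdot, np.array([theta, 0.0]))
    print(theta, np.linalg.eigvals(A))
# theta = 0  gives eigenvalues near ±1i (marginally stable, so inconclusive)
# theta = pi gives eigenvalues near ±1  (unstable)
```

This numerical route is handy when the dynamics are too messy to differentiate by hand, at the cost of a small finite-difference error.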

Want More Gereshes?

If you want to receive the weekly Gereshes blog post directly to your email every Monday morning, you can sign up for the newsletter here! Don’t want another email? That’s ok, Gereshes also has a twitter account and subreddit!
