
Kyler Oconnor

Answered question

2022-11-14

Lyapunov equation for stability analysis - what's the point?
In the following theorem $A, P, Q \in \mathbb{R}^{n \times n}$, and $P$ and $Q$ are symmetric. The notation $P > 0$ means that the matrix $P$ is positive definite.
Given any $Q > 0$, there exists a unique $P > 0$ satisfying $A^T P + P A + Q = 0$ if and only if the linear system $\dot{x} = Ax$ is globally asymptotically stable. The quadratic function $V(z) = z^T P z$ is a Lyapunov function that can be used to verify stability.
Recently I became quite familiar with tools for solving the Lyapunov equation $A^T P + P A + Q = 0$ to obtain $P$. However, in the context of stability analysis I see one huge problem: you still have to check that $P$ is positive definite, or equivalently that all of its eigenvalues are positive. But for a linear system, the fact that all eigenvalues of $A$ have negative real parts already ensures that the system is globally asymptotically stable in the first place.
So what is the point of going through all this hassle just to end up looking for eigenvalues anyway ($P$ and $A$ are the same size!)? Does the matrix being symmetric make such a big difference?

Answer & Explanation

Cseszteq5j

Beginner · 2022-11-15 · Added 17 answers

Step 1
Computing eigenvalues of non-Hermitian (and non-symmetric) matrices can be severely ill-conditioned, so it is hard to do reliably in floating point; moreover, by Abel–Ruffini, eigenvalues cannot in general be expressed in radicals for matrices of size 5 or larger.
Finding eigenvalues for hermitian (symmetric) matrices is a much easier problem - and can be reliably solved numerically.
For asymptotic stability we do not need the exact eigenvalues; we only need their real parts to be strictly negative. The existence of $P = P^T > 0$ satisfying the Lyapunov equation is a necessary and sufficient criterion for this.
We have explicit formulas for solutions of the Lyapunov equations.
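For the record, one such explicit formula (valid when $A$ is Hurwitz, i.e. the system is stable) is the integral representation:

```latex
P = \int_0^{\infty} e^{A^T t}\, Q\, e^{A t}\, dt,
\qquad\text{so that}\qquad
A^T P + P A = \int_0^{\infty} \frac{d}{dt}\Bigl(e^{A^T t}\, Q\, e^{A t}\Bigr)\, dt = -Q,
```

since $e^{A t} \to 0$ as $t \to \infty$ for a Hurwitz $A$.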
The Lyapunov equation is linear in $P$ (for real matrices it is a linear system in the entries of $P$), so we can solve it numerically by standard means.
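As an illustration, here is a minimal sketch using SciPy (the matrices `A` and `Q` below are made-up examples). Note the sign convention: `solve_continuous_lyapunov(a, q)` solves $a X + X a^H = q$, so to get $A^T P + P A = -Q$ we pass `A.T` and `-Q`.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example stable matrix (eigenvalues -1 and -2) and Q = I
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# SciPy solves a X + X a^H = q, so pass a = A.T and q = -Q
# to obtain the solution of A^T P + P A + Q = 0
P = solve_continuous_lyapunov(A.T, -Q)

# The residual of the Lyapunov equation should vanish
print(np.allclose(A.T @ P + P @ A + Q, 0))  # True
```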
We are not forced to compute the eigenvalues of $P$: any other criterion for positive definiteness works, for example Sylvester's criterion via leading principal minors, or attempting a Cholesky factorization.
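The Cholesky route is particularly cheap in practice: a Cholesky factorization of a symmetric matrix succeeds exactly when the matrix is positive definite, so the check needs no eigenvalue computation at all. A minimal sketch (the helper name is my own):

```python
import numpy as np

def is_positive_definite(P):
    """Return True iff the symmetric matrix P is positive definite.
    np.linalg.cholesky raises LinAlgError exactly when P is not PD."""
    try:
        np.linalg.cholesky(P)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False
```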
The most important one: we obtain estimates on the norm $\|x(t)\|$ in terms of the matrices $P$ and $Q$. A mere study of the eigenvalues of $A$ (assuming we can even find them...) does not give such information (we could use the eigenvectors to obtain it, but the problem would become even harder).
Step 2
The last result is quite standard for Lyapunov stability. Let x(t) be a solution of the differential equation. Then we can write
$$\frac{d}{dt}\bigl(x(t)^T P x(t)\bigr) = \dot{x}(t)^T P x(t) + x(t)^T P \dot{x}(t) = x(t)^T (A^T P + P A)\, x(t) = -x(t)^T Q\, x(t) \le -q\,\|x(t)\|^2 \le -\frac{q}{\|P\|}\, x(t)^T P x(t),$$
where q > 0 is the minimal eigenvalue of Q.
Grönwall's lemma then gives us the inequality
$$x(t)^T P x(t) \le \exp\!\Bigl(-\frac{q t}{\|P\|}\Bigr)\, x(0)^T P x(0) \le \|P\|\, \exp\!\Bigl(-\frac{q t}{\|P\|}\Bigr)\, \|x(0)\|^2.$$
On the other hand, if $p > 0$ is the minimal eigenvalue of $P$, then
$$p\,\|x(t)\|^2 \le x(t)^T P x(t) \le \|P\|\, \exp\!\Bigl(-\frac{q t}{\|P\|}\Bigr)\, \|x(0)\|^2,$$
hence
$$\|x(t)\|^2 \le \frac{\|P\|}{p}\, \exp\!\Bigl(-\frac{q t}{\|P\|}\Bigr)\, \|x(0)\|^2.$$
The factor $\|P\|/p$ is nothing but the condition number of the matrix $P$:
$$\operatorname{cond}(P) = \|P\|\,\|P^{-1}\| = \|P\|/p.$$
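The decay estimate above can be checked numerically. The sketch below (reusing the made-up example matrices from before) evaluates the exact solution $x(t) = e^{At} x(0)$ on a time grid and verifies that $\|x(t)\|^2$ stays below $\operatorname{cond}(P)\, e^{-q t/\|P\|}\, \|x(0)\|^2$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # stable example matrix
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)  # A^T P + P A = -Q

q = np.linalg.eigvalsh(Q).min()       # minimal eigenvalue of Q
normP = np.linalg.norm(P, 2)          # spectral norm ||P||
p = np.linalg.eigvalsh(P).min()       # minimal eigenvalue of P
condP = normP / p                     # condition number of P

x0 = np.array([1.0, -1.0])
ok = True
for t in np.linspace(0.0, 5.0, 51):
    x = expm(A * t) @ x0              # exact solution x(t) = e^{At} x0
    bound = condP * np.exp(-q * t / normP) * (x0 @ x0)
    ok = ok and (x @ x <= bound + 1e-12)
print(ok)  # True
```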
