
Handout Lesson 29, Matrix Exponentials and Linear Systems

Textbook Section(s).

This lesson is based on Section 5.6 of your textbook by Edwards, Penney, and Calvis.

\(2\times 2\) inverses.

Before we jump into this lesson, just a quick reminder about the inverses of \(2 \times 2\) matrices. If \(\det(M) \neq 0\) and
\begin{align*} M= \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \end{align*}
is a \(2 \times 2\) matrix, then the inverse of \(M\text{,}\) denoted \(M^{-1}\text{,}\) is
\begin{align*} M^{-1}=\frac{1}{\det(M)} \left[ \begin{array}{cc} d & -b \\ -c & a \end{array} \right] \end{align*}
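As a quick sanity check, the formula above can be verified numerically. The sketch below (assuming NumPy is available) implements the adjugate formula and confirms that \(M M^{-1}\) is the identity:

```python
# Minimal sketch: the 2x2 inverse formula from the text, checked with NumPy.
import numpy as np

def inverse_2x2(M):
    """Invert a 2x2 matrix via the adjugate formula M^{-1} = adj(M)/det(M)."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (det(M) = 0)")
    return (1 / det) * np.array([[d, -b], [-c, a]])

M = np.array([[2.0, 1.0], [5.0, 3.0]])  # det(M) = 2*3 - 1*5 = 1
Minv = inverse_2x2(M)
print(np.allclose(M @ Minv, np.eye(2)))  # True
```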

Fundamental Matrices.

In this lesson, we continue to explore solutions of homogeneous systems of linear differential equations with constant coefficients. In other words, we are trying to solve
\begin{gather} \mathbf{x'}=\mathbf{A}\mathbf{x}\tag{✢} \end{gather}
where \(\mathbf{A}\) is an \(n\times n\) matrix of constants and \(\mathbf{x}\) is a vector of functions of \(t\text{.}\)
Recall:
  • By the Remark in Section 5.1 (after Theorem 3) in your textbook by Edwards et al., (✢) has \(n\) linearly independent solutions \(\mathbf{x_1}(t), \mathbf{x_2}(t), \dots, \mathbf{x_n}(t)\text{.}\)
  • Every solution \(\mathbf{x}(t)\) of (✢) is a linear combination of \(\mathbf{x_1}(t), \mathbf{x_2}(t), \dots, \mathbf{x_n}(t)\text{.}\)

Definition 181.

Let \(\mathbf{x_1}(t), \mathbf{x_2}(t), \dots, \mathbf{x_n}(t)\) be \(n\) linearly independent solutions of
\begin{gather} \mathbf{x'}=\mathbf{A}\mathbf{x}\tag{#} \end{gather}
The matrix
\begin{align*} \mathbf{\Phi}(t)= \left[ \begin{array}{cccc} | & | & & | \\ \mathbf{x_1}(t) & \mathbf{x_2}(t) & \dots & \mathbf{x_n}(t) \\ | & | & & | \end{array} \right] \end{align*}
is called a fundamental matrix for the system (#).

Example 182. Fundamental matrices.

Consider the system of differential equations
\begin{align} \mathbf{x'}= \left[ \begin{array}{cc} 2 & -1 \\ 4 & -2 \end{array} \right] \mathbf{x}\tag{†} \end{align}
  1. Find a fundamental matrix \(\mathbf{\Phi}(t)\) for (†).
  2. Verify that \(\mathbf{\Phi}(t)\) is a solution to
    \begin{align*} \mathbf{X'}= \left[ \begin{array}{cc} 2 & -1 \\ 4 & -2 \end{array} \right] \mathbf{X} \end{align*}
    (Notice that \(\mathbf{X}\) is a \(2 \times 2\) matrix, not a vector.)
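One candidate fundamental matrix for (†) can be checked symbolically. The sketch below (assuming SymPy is available; the particular \(\mathbf{\Phi}(t)\) shown is one possible answer, not the only one) verifies that \(\mathbf{\Phi}'(t)=\mathbf{A}\mathbf{\Phi}(t)\) and that the columns are independent, i.e. \(\det\mathbf{\Phi}(t)\neq 0\):

```python
# Sketch: checking a candidate fundamental matrix for x' = Ax with SymPy.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, -1], [4, -2]])
# One possible fundamental matrix (its columns are two independent solutions):
Phi = sp.Matrix([[1 + 2*t, -t], [4*t, 1 - 2*t]])

# Phi'(t) - A*Phi(t) should be the zero matrix
print(sp.simplify(Phi.diff(t) - A * Phi))
# det(Phi) should be a nonzero constant (here it is 1), so the columns
# are linearly independent for every t
print(sp.simplify(Phi.det()))
```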

Matrix exponentials and homogeneous systems with constant coefficients.

Recall that the differential equation
\begin{gather} \frac{dx}{dt}=ax\tag{✠} \end{gather}
has solution
\begin{equation*} x(t)=ce^{at}\text{.} \end{equation*}
Although (✠) is a single differential equation, we can think of it as a homogeneous matrix system with one equation in one variable:
\begin{gather*} \frac{d\mathbf{x}}{dt} = [a] \mathbf{x} \end{gather*}
For larger homogeneous matrix systems
\begin{gather} \mathbf{x'}=\mathbf{A}\mathbf{x}\tag{‑} \end{gather}
with \(\mathbf{A}\) an \(n\times n\) matrix of constants, it would be natural and consistent if we could write the solutions of (‑) as
\begin{gather*} \mathbf{x}(t)=e^{ \mathbf{A}t } \mathbf{c} =e^{ \mathbf{A}t } \mathbf{x_0} \end{gather*}
where \(\mathbf{c}=\mathbf{x_0}=\mathbf{x}(0)\) is the initial value. If we are going to achieve this, we will need an appropriate definition for \(e^{ \mathbf{A}t }\text{.}\)

Definition 183.

Let \(\mathbf{A}\) be an \(n\times n\) matrix of constants. We define \(e^{\mathbf{A}}\) and \(e^{\mathbf{A}t}\) by
\begin{gather*} e^{\mathbf{A}} = \sum_{k=0}^{\infty} \frac{\mathbf{A}^k}{k!} \end{gather*}
and
\begin{gather*} e^{\mathbf{A}t} = \sum_{k=0}^{\infty} \frac{\mathbf{A}^k t^k}{k!} \end{gather*}
NOTE: By definition \(\mathbf{A}^0=\mathbf{I}\text{.}\)
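The defining series can be approximated numerically by truncating it. The sketch below (assuming NumPy and SciPy are available; the truncation depth `N=30` is an arbitrary choice) compares a partial sum against SciPy's built-in `expm`:

```python
# Sketch: e^{At} as a truncated power series, compared with scipy.linalg.expm.
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, N=30):
    """Partial sum of sum_{k=0}^{N} (At)^k / k!  (with A^0 = I by definition)."""
    result = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])             # k = 0 term: (At)^0 / 0! = I
    for k in range(N + 1):
        result += term
        term = term @ (A * t) / (k + 1)   # next term: (At)^{k+1} / (k+1)!
    return result

A = np.array([[1.0, 2.0], [0.0, 3.0]])
print(np.allclose(expm_series(A, 0.5), expm(0.5 * A)))  # True
```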
Properties of the matrix exponential can be found in Chapter 9, Theorem 7 of Fundamentals of Differential Equations and Boundary Value Problems by Nagle, Saff, and Snider.

Example 186. Solving a system with a nilpotent matrix.

Let’s return to the system we studied in Example 182.
\begin{align} \mathbf{x'} = \mathbf{A}\mathbf{x} = \left[ \begin{array}{cc} 2 & -1 \\ 4 & -2 \end{array} \right] \mathbf{x}\tag{✢✢} \end{align}
  1. Find \(\mathbf{A}^0, \mathbf{A}^1, \mathbf{A}^2, \mathbf{A}^3, \dots\text{.}\)
  2. Find \(e^{\mathbf{A}t}\text{.}\)
  3. Verify that the columns of \(e^{\mathbf{A}t}\) are linearly independent solutions of (✢✢).
  4. Find a general solution of (✢✢).
  5. Find a solution of (✢✢) that satisfies the initial condition \(\mathbf{x}(0)=\left[ \begin{array}{c} 4 \\ 10 \end{array} \right]\text{.}\)
For reference, the matrix exponential for this system works out to
\begin{align*} e^{{\mathbf A}t} = \left[ \begin{array}{cc} 1+2t & -t \\ 4t & 1-2t \end{array} \right] \end{align*}
We were able to complete the last example because \(\mathbf{A}^k=\mathbf{0}\) for \(k=2, 3, \dots\text{.}\) When positive integer powers of a matrix \(\mathbf{A}\) eventually result in the zero matrix, we call the matrix \(\mathbf{A}\) a nilpotent matrix.
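For the matrix in Example 186, \(\mathbf{A}^2=\mathbf{0}\), so the defining series terminates after two terms: \(e^{\mathbf{A}t}=\mathbf{I}+\mathbf{A}t\). The sketch below (assuming SymPy is available) verifies this and solves the initial value problem from the example:

```python
# Sketch: the nilpotent case of Example 186, where e^{At} = I + At.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, -1], [4, -2]])
print(A**2)                     # the zero matrix, so A is nilpotent

expAt = sp.eye(2) + A * t       # series terminates: e^{At} = I + At
print(expAt)

# Solution through x(0) = [4, 10]:  x(t) = e^{At} x0 = [4 - 2t, 10 - 4t]
x0 = sp.Matrix([4, 10])
print(sp.simplify(expAt * x0))
```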

Definition 187.

An \(n \times n\) matrix \(\mathbf{A}\) is nilpotent if there is a positive integer \(k\) such that \(\mathbf{A}^k=\mathbf{0}\text{.}\)
Calculating \(e^{\mathbf{A}t}\) from the definition is difficult in general because we need to calculate an infinite series of matrices. If the matrix is nilpotent, then the definition becomes a finite sum of matrices. There is one other case in which computation of \(e^{\mathbf{A}t}\) is straightforward: the case when \(\mathbf{A}\) is a diagonal matrix.
Example 4 in Section 5.6 of your textbook by Edwards et al. shows you how to compute \(e^{\mathbf{A}t}\) when \(\mathbf{A}\) is the sum of a nilpotent matrix and a diagonal matrix. Unless \(\mathbf{A}\) falls into one of these very special categories, it will be virtually impossible for us to compute \(e^{\mathbf{A}t}\) from the definition. The next theorem gives us a much faster way to compute \(e^{\mathbf{A}t}\) if we know a fundamental matrix for \(\mathbf{x'}=\mathbf{A}\mathbf{x}\text{.}\)
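In the diagonal case mentioned above, \(e^{\mathbf{A}t}\) is simply the diagonal matrix of the scalar exponentials \(e^{d_i t}\), since every power \(\mathbf{A}^k\) stays diagonal. A minimal numerical check (assuming NumPy and SciPy are available; the diagonal entries are an arbitrary choice):

```python
# Sketch: for diagonal A = diag(d_1, ..., d_n), e^{At} = diag(e^{d_1 t}, ..., e^{d_n t}).
import numpy as np
from scipy.linalg import expm

d = np.array([1.0, -2.0, 3.0])   # arbitrary diagonal entries
A = np.diag(d)
t = 0.7

# Compare SciPy's expm against the entrywise scalar exponentials
print(np.allclose(expm(A * t), np.diag(np.exp(d * t))))  # True
```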
Theorem. If \(\mathbf{\Phi}(t)\) is a fundamental matrix for \(\mathbf{x'}=\mathbf{A}\mathbf{x}\text{,}\) then
\begin{gather*} e^{\mathbf{A}t}=\mathbf{\Phi}(t)\mathbf{\Phi}(0)^{-1}\text{.} \end{gather*}
Proof:

Example 190. Using a fundamental matrix to calculate a matrix exponential.

Let
\begin{align*} \mathbf{A}= \left[ \begin{array}{cc} 5 & -4 \\ 2 & -1 \end{array} \right] \end{align*}
It can be shown (and you should verify this at home) that \(\mathbf{A}\) has eigenvector \(\left[\begin{array}{c} 1 \\ 1 \end{array}\right]\) corresponding to eigenvalue 1 and eigenvector \(\left[\begin{array}{c} 2 \\ 1 \end{array}\right]\) corresponding to eigenvalue 3. Use this information to find \(e^{\mathbf{A}t}\) and find a solution that satisfies the initial condition \(\mathbf{x}(0)=\left[ \begin{array}{c} 3 \\ 4 \end{array} \right]\text{.}\)
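This example can be checked numerically using the standard identity \(e^{\mathbf{A}t}=\mathbf{\Phi}(t)\mathbf{\Phi}(0)^{-1}\), building \(\mathbf{\Phi}(t)\) from the two eigenpairs given above. A sketch assuming NumPy and SciPy are available (the sample time \(t=0.3\) is an arbitrary choice):

```python
# Sketch of Example 190: e^{At} = Phi(t) Phi(0)^{-1}, with Phi built
# from the eigenpairs (1, [1,1]) and (3, [2,1]). Checked against SciPy.
import numpy as np
from scipy.linalg import expm

A = np.array([[5.0, -4.0], [2.0, -1.0]])
t = 0.3

# Columns of Phi(s) are the eigenvector solutions e^{lambda_i s} v_i
def Phi(s):
    return np.column_stack((np.exp(s) * np.array([1.0, 1.0]),
                            np.exp(3 * s) * np.array([2.0, 1.0])))

eAt = Phi(t) @ np.linalg.inv(Phi(0))
print(np.allclose(eAt, expm(A * t)))       # True

# Solution through x(0) = [3, 4]: x(t) = [5 e^t - 2 e^{3t}, 5 e^t - e^{3t}]
x = eAt @ np.array([3.0, 4.0])
print(np.allclose(x, [5*np.exp(t) - 2*np.exp(3*t),
                      5*np.exp(t) - np.exp(3*t)]))  # True
```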

Final Remarks.

We made some observations during this lesson, but we did not state them explicitly. We summarize some of our findings below.