Before we jump into this lesson, just a quick reminder about the inverses of \(2 \times 2\) matrices. If \(\det(M) \neq 0\) and
\begin{align*}
M= \left[ \begin{array}{cc} a \amp b \\ c \amp d \end{array}
\right]
\end{align*}
is a \(2 \times 2\) matrix, then the inverse of \(M\text{,}\) denoted \(M^{-1}\text{,}\) is
\begin{align*}
M^{-1}=\frac{1}{\det(M)} \left[ \begin{array}{cc} d \amp -b
\\ -c \amp a \end{array} \right]
\end{align*}
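As a quick numerical companion to this formula, here is a Python sketch (the function name `inverse_2x2` is ours, not from the text):

```python
def inverse_2x2(a, b, c, d):
    """Invert M = [[a, b], [c, d]] via the adjugate formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (det(M) = 0)")
    # M^{-1} = (1/det) * [[d, -b], [-c, a]]
    return [[d / det, -b / det], [-c / det, a / det]]

# Example: M = [[2, 1], [1, 1]] has det 1, so M^{-1} = [[1, -1], [-1, 2]]
print(inverse_2x2(2, 1, 1, 1))
```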
In this lesson, we continue to explore solutions of homogeneous systems of linear differential equations with constant coefficients. In other words, we are trying to solve
\begin{gather}
\mathbf{x'}=\mathbf{A}\mathbf{x}\tag{1}
\end{gather}
where \(\mathbf{A}\) is an \(n\times n\) matrix of constants and \(\mathbf{x}\) is a vector of functions of \(t\text{.}\)
Recall:
- By the Remark in section 5.1 after Theorem 3 in your textbook by Edwards, et al., (1) has \(n\) linearly independent solutions \(\mathbf{x_1}(t), \mathbf{x_2}(t), \dots, \mathbf{x_n}(t)\text{.}\)
- Every solution \(\mathbf{x}(t)\) of (1) is a linear combination of \(\mathbf{x_1}(t), \mathbf{x_2}(t), \dots, \mathbf{x_n}(t)\text{.}\)
Definition 181.
Let \(\mathbf{x_1}(t), \mathbf{x_2}(t), \dots, \mathbf{x_n}(t)\) be \(n\) linearly independent solutions of
\begin{gather}
\mathbf{x'}=\mathbf{A}\mathbf{x}\tag{2}
\end{gather}
The matrix
\begin{align*}
\mathbf{\Phi}(t)= \left[ \begin{array}{cccc} | \amp | \amp
\amp | \\ \mathbf{x_1}(t) \amp \mathbf{x_2}(t) \amp \dots
\amp \mathbf{x_n}(t)
\\ | \amp | \amp \amp |
\end{array} \right]
\end{align*}
is called a fundamental matrix for the system (2).
Matrix exponentials and homogeneous systems with constant coefficients.
Recall that the differential equation
\begin{gather}
\frac{dx}{dt}=ax\tag{3}
\end{gather}
has solution
\begin{equation*}
x(t)=ce^{at}\text{.}
\end{equation*}
Although (3) is a single differential equation, we can think of it as a homogeneous matrix system of equations with 1 equation and 1 variable, yielding
\begin{gather*}
\frac{d\mathbf{x}}{dt} = [a] \mathbf{x}
\end{gather*}
For larger homogeneous matrix systems
\begin{gather}
\mathbf{x'}=\mathbf{A}\mathbf{x}\tag{4}
\end{gather}
with \(\mathbf{A}\) an \(n\times n\) matrix of constants, it would be natural and consistent if we could write the solutions of (4) as
\begin{gather*}
\mathbf{x}(t)=e^{ \mathbf{A}t } \mathbf{c}
=e^{ \mathbf{A}t } \mathbf{x_0}
\end{gather*}
If we are going to achieve this, we will need an appropriate definition for \(e^{ \mathbf{A}t }\text{.}\)
Definition 183.
Let \(\mathbf{A}\) be an \(n\times n\) matrix of constants. We define \(e^{\mathbf{A}}\) and \(e^{\mathbf{A}t}\) by
\begin{gather*}
e^{\mathbf{A}} \defn \sum_{k=0}^{\infty}
\frac{\mathbf{A}^k}{k!}
\end{gather*}
and
\begin{gather*}
e^{\mathbf{A}t} \defn \sum_{k=0}^{\infty}
\frac{\mathbf{A}^k t^k}{k!}
\end{gather*}
NOTE: By definition \(\mathbf{A}^0=\mathbf{I}\text{.}\)
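As an illustration of the definition, the partial sums of this series can be computed directly. The following Python sketch (the helper names `mat_mul` and `mat_exp` are ours, not from the text) works with plain nested lists:

```python
from math import exp, factorial

def mat_mul(A, B):
    """Multiply two square matrices stored as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, t, terms=25):
    """Approximate e^{At} by a partial sum of the defining series."""
    n = len(A)
    power = [[float(i == j) for j in range(n)] for i in range(n)]  # (At)^0 = I
    total = [row[:] for row in power]
    At = [[A[i][j] * t for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        power = mat_mul(power, At)  # now power = (At)^k = A^k t^k
        for i in range(n):
            for j in range(n):
                total[i][j] += power[i][j] / factorial(k)
    return total

# 1x1 sanity check against the scalar case: e^{[a]t} = [e^{at}]
print(mat_exp([[1.0]], 1.0)[0][0], exp(1.0))
```

For a general matrix the series never terminates, so this only approximates \(e^{\mathbf{A}t}\); the special cases below (nilpotent and diagonal matrices) are exactly the ones where the computation collapses to something finite.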
The following properties of the matrix exponential can be found in Chapter 9, Theorem 7 of Fundamentals of Differential Equations and Boundary Value Problems by Nagle, Saff, and Snider.
Theorem 184. Properties of the Matrix Exponential.
Let \(\mathbf{A}\) and \(\mathbf{B}\) be \(n \times n\) matrices containing constants. Let \(r\text{,}\) \(s\text{,}\) and \(t\) be real or complex numbers. Then
- \(\displaystyle e^{\mathbf{A}\cdot 0} = e^{\mathbf{0}}=\mathbf{I}\)
- \(\displaystyle e^{\mathbf{A}(r+s)} = e^{\mathbf{A}r}e^{\mathbf{A}s}\)
- \((e^{\mathbf{A}t})^{-1} = e^{-\mathbf{A}t}\) (In particular, this means that the columns of \(e^{\mathbf{A}t}\) are linearly independent.)
- If \(\mathbf{A}\mathbf{B}=\mathbf{B}\mathbf{A}\text{,}\) then \(e^{(\mathbf{A}+\mathbf{B})t}=e^{\mathbf{A}t}e^{\mathbf{B}t}\text{.}\)
- \(\displaystyle e^{r\mathbf{I}t}=e^{rt}\mathbf{I}\)
- \(\displaystyle \frac{d}{dt}[e^{\mathbf{A}t}] = \mathbf{A}e^{\mathbf{A}t}\)
Corollary 185.
\(\mathbf{X}=e^{\mathbf{A}t}\) is a solution of
\begin{gather*}
\mathbf{X'}=\mathbf{A}\mathbf{X}
\end{gather*}
Example 186.
Compute \(e^{\mathbf{A}t}\) for the system
\begin{align*}
\mathbf{x'} = \left[ \begin{array}{cc}
2 & -1 \\ 4 & -2 \end{array} \right] \mathbf{x}
\end{align*}
Solution: since \(\mathbf{A}^2=\mathbf{0}\text{,}\) the series for \(e^{\mathbf{A}t}\) reduces to \(\mathbf{I}+\mathbf{A}t\text{,}\) so
\begin{align*}
e^{{\mathbf A}t} = \left[ \begin{array}{cc}
1+2t & -t \\ 4t & 1-2t \end{array} \right]
\end{align*}
We were able to complete the last example because \(\mathbf{A}^k=\mathbf{0}\) for \(k=2, 3, \dots\text{.}\) When positive integer powers of a matrix \(\mathbf{A}\) eventually result in the zero matrix, we call the matrix \(\mathbf{A}\) a nilpotent matrix.
Definition 187.
An \(n \times n\) matrix \(\mathbf{A}\) is nilpotent if there is a positive integer \(k\) such that \(\mathbf{A}^k=\mathbf{0}\text{.}\)
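For the matrix from the last example, nilpotency is easy to check directly. A small Python sketch (the helper names are ours, not from the text):

```python
def mat_mul(A, B):
    """Multiply two square matrices stored as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_nilpotent(A, max_power=10):
    """Return True if some power A^k (k <= max_power) is the zero matrix."""
    P = [row[:] for row in A]
    for _ in range(max_power):
        if all(x == 0 for row in P for x in row):
            return True
        P = mat_mul(P, A)
    return False

A = [[2, -1], [4, -2]]
print(mat_mul(A, A))    # A^2 = [[0, 0], [0, 0]]
print(is_nilpotent(A))  # True
```

For an \(n \times n\) nilpotent matrix one can show \(\mathbf{A}^n=\mathbf{0}\text{,}\) so checking powers up to \(n\) would already suffice.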
Calculating \(e^{\mathbf{A}t}\) from the definition is difficult in general because we need to calculate an infinite series of matrices. If the matrix is nilpotent, then the definition becomes a finite sum of matrices. There is one other case in which computation of \(e^{\mathbf{A}t}\) is straightforward: the case when \(\mathbf{A}\) is a diagonal matrix.
Theorem 188.
If \(\mathbf{A}\) is a diagonal matrix,
\begin{align*}
\mathbf{A} = \left[ \begin{array}{ccccc} d_1 & 0 & 0
& \dots & 0 \\ 0 & d_2 & 0 & \dots & 0
\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0
& 0 & 0 & \dots & d_n \end{array} \right]
\end{align*}
then \(e^{\mathbf{A}t}\) is also diagonal and is given by
\begin{align*}
e^{\mathbf{A}t} = \left[ \begin{array}{ccccc} e^{d_1 t}
& 0 & 0 & \dots & 0 \\ 0 & e^{d_2 t} & 0
& \dots & 0 \\ \vdots & \vdots & \vdots &
\ddots & \vdots \\ 0 & 0 & 0 & \dots & e^{d_n t}
\end{array} \right]
\end{align*}
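Theorem 188 makes the diagonal case a one-liner: exponentiate each diagonal entry. A Python sketch (the function name `exp_diag` is ours):

```python
from math import exp

def exp_diag(diag, t):
    """e^{At} for A = diag(d_1, ..., d_n): diagonal with entries e^{d_i t}."""
    n = len(diag)
    return [[exp(diag[i] * t) if i == j else 0.0 for j in range(n)]
            for i in range(n)]

# A = diag(1, -2) gives e^{At} = diag(e^t, e^{-2t})
print(exp_diag([1.0, -2.0], 1.0))
```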
Example 4 in section 5.6 of your textbook by Edwards, et al. shows you how to compute \(e^{\mathbf{A}t}\) when \(\mathbf{A}\) is the sum of a nilpotent matrix and a diagonal matrix. Unless \(\mathbf{A}\) falls into one of these very special categories, it will be virtually impossible for us to compute \(e^{\mathbf{A}t}\) from the definition. There is, however, a much faster way to compute \(e^{\mathbf{A}t}\) if we know a fundamental matrix \(\mathbf{\Phi}(t)\) for \(\mathbf{x'}=\mathbf{A}\mathbf{x}\text{:}\) one can show that \(e^{\mathbf{A}t}=\mathbf{\Phi}(t)\mathbf{\Phi}(0)^{-1}\text{,}\) which is why we reviewed \(2 \times 2\) inverses at the start of this lesson.
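The standard shortcut \(e^{\mathbf{A}t}=\mathbf{\Phi}(t)\mathbf{\Phi}(0)^{-1}\) for a fundamental matrix \(\mathbf{\Phi}(t)\) can be spot-checked numerically. The sketch below (the matrices and helper names are our own illustration, not from the text) uses \(\mathbf{A} = \left[\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right]\text{,}\) whose exponential is \(\mathbf{I}+\mathbf{A}t\) since \(\mathbf{A}^2=\mathbf{0}\text{:}\)

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse_2x2(M):
    """Invert a 2x2 matrix via the adjugate formula from the start of the lesson."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# For A = [[0, 1], [0, 0]], the columns of Phi(t) below are two linearly
# independent solutions of x' = Ax, so Phi(t) is a fundamental matrix.
def Phi(t):
    return [[1.0, 1.0 + t], [0.0, 1.0]]

def exp_At(t):
    # e^{At} = Phi(t) * Phi(0)^{-1}
    return mat_mul(Phi(t), inverse_2x2(Phi(0.0)))

print(exp_At(3.0))  # expect [[1, 3], [0, 1]], i.e. I + At since A^2 = 0
```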
We made some observations during this lesson, but we did not state them explicitly. We summarize some of our findings below.
Theorem 191.
The matrix exponential
\(e^{\mathbf{A}t}\) is a fundamental matrix for the linear system
\(\mathbf{x}'=\mathbf{Ax}\text{.}\)
Theorem 192.
If \(\mathbf{A}\) is a square matrix, then the solution of the initial value problem
\begin{gather*}
\mathbf{x}'=\mathbf{Ax}, \qquad
\mathbf{x}(0)=\mathbf{x}_0
\end{gather*}
is
\begin{gather*}
\mathbf{x}(t)=e^{\mathbf{A}t}\mathbf{x}_0.
\end{gather*}
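As a concrete instance of Theorem 192, the nilpotent matrix from the earlier example gives a closed form for \(e^{\mathbf{A}t}\text{,}\) so the initial value problem can be solved explicitly. A Python sketch (function names are ours; it reuses \(e^{\mathbf{A}t}=\mathbf{I}+\mathbf{A}t\) for that particular \(\mathbf{A}\)):

```python
# A = [[2, -1], [4, -2]] satisfies A^2 = 0, so e^{At} = I + At.
def exp_At(t):
    return [[1 + 2 * t, -t], [4 * t, 1 - 2 * t]]

def solve_ivp(t, x0):
    """x(t) = e^{At} x0 solves x' = Ax, x(0) = x0."""
    E = exp_At(t)
    return [E[0][0] * x0[0] + E[0][1] * x0[1],
            E[1][0] * x0[0] + E[1][1] * x0[1]]

# At t = 0, e^{A*0} = I, so the solution returns the initial condition.
print(solve_ivp(0.0, [1.0, 0.0]))  # [1.0, 0.0]
# With x0 = [1, 0], x(t) = [1 + 2t, 4t], and indeed x'(t) = [2, 4] = A x(t).
```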