(I) Review of Matrix Notation and Linear Systems

An \( m \times n \) matrix \( A \) is a rectangular array of numbers with \( m \) rows and \( n \) columns:

\[ A = [a_{ij}]_{m \times n} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix} \]

Matrix Operations

Matrix Multiplication

For \( A_{m \times p} \) and \( B_{p \times n} \), the product \( C = AB \) is an \( m \times n \) matrix where each entry \( c_{ij} \) is the scalar product of the \( i^{th} \) row of \( A \) and the \( j^{th} \) column of \( B \):

\[ c_{ij} = \sum_{k=1}^{p} a_{ik}b_{kj} \]
CRITICAL RULE: The product \( AB \) is defined IF AND ONLY IF the number of columns of \( A \) matches the number of rows of \( B \).

Properties: Multiplication is generally not commutative (\( AB \neq BA \)). Furthermore, \( AB = \mathbf{0} \) does not necessarily imply \( A = \mathbf{0} \) or \( B = \mathbf{0} \).
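A quick numerical illustration of both cautions, sketched with NumPy (the matrices \( A \) and \( B \) here are convenient examples chosen for this sketch, not taken from the notes):

```python
import numpy as np

# A and B are both nonzero, yet AB is the zero matrix and AB != BA.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

print(A @ B)   # [[0 0], [0 0]] -- the zero matrix, though A and B are nonzero
print(B @ A)   # [[0 1], [0 0]] -- different from A @ B, so AB != BA
```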

Inverses and Determinants

The Identity Matrix \( I_{n \times n} \) has 1s on the diagonal and 0s elsewhere. A square matrix \( A \) is nonsingular (invertible) if there exists \( A^{-1} \) such that \( AA^{-1} = A^{-1}A = I \). Such an inverse exists if and only if the determinant \( |A| \neq 0 \).
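As a sketch, the determinant test can be checked numerically with NumPy; the matrix below happens to be the coefficient matrix that reappears in Example 2 of section (IV):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [-2.0, 1.0]])

print(np.linalg.det(A))                   # 6.0 (nonzero), so A is nonsingular
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A A^{-1} = I
```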


(II) Matrix-Valued Functions

A matrix-valued function \( A(t) = [a_{ij}(t)] \) and vector-valued function \( \vec{x}(t) = [x_i(t)] \) are differentiated component-wise:

\[ A'(t) = \frac{dA}{dt} = [a_{ij}'(t)] \]

Rules: \( (A+B)' = A' + B' \) and the product rule \( (AB)' = AB' + A'B \).

Example 1

Given \( A(t) = \begin{bmatrix} e^t & t \\ 5t & -1 \end{bmatrix} \) and \( \vec{b}(t) = \begin{bmatrix} 3 \\ 2e^{-t} \end{bmatrix} \), verify the product rule \( (A\vec{b})' = A\vec{b}' + A'\vec{b} \).

Verification: Computing both sides and confirming entry-wise equality is left as an exercise for the student; a symbolic check is sketched below.
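The sketch below uses SymPy (an assumption of this check, not part of the notes): both sides of the product rule are computed component-wise and their difference simplifies to the zero vector.

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[sp.exp(t), t],
               [5*t, -1]])
b = sp.Matrix([3, 2*sp.exp(-t)])

lhs = sp.diff(A * b, t)                      # (A b)', differentiated entry-wise
rhs = A * sp.diff(b, t) + sp.diff(A, t) * b  # A b' + A' b
print(sp.simplify(lhs - rhs))                # Matrix([[0], [0]]): rule verified
```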


(III) First-Order Linear Systems

A first-order linear system of \( n \) differential equations can be written as:

\( x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + f_1(t) \)
\( x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + f_2(t) \)
\( \vdots \)
\( x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + f_n(t) \)

Using vector-matrix notation, we define the following column vectors and matrix:

\( \vec{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \quad \mathbf{P}(t) = \begin{bmatrix} p_{11}(t) & \dots & p_{1n}(t) \\ \vdots & \ddots & \vdots \\ p_{n1}(t) & \dots & p_{nn}(t) \end{bmatrix}, \quad \vec{f}(t) = \begin{bmatrix} f_1(t) \\ \vdots \\ f_n(t) \end{bmatrix} \)

The nonhomogeneous system is written as:

\[ \frac{d\vec{x}}{dt} = \mathbf{P}(t)\vec{x} + \vec{f}(t) \]

The associated homogeneous equation is:

\[ \frac{d\vec{x}}{dt} = \mathbf{P}(t)\vec{x} \]

An Initial Value Problem (IVP) involves solving the system subject to the initial condition \( \vec{x}(a) = \vec{b} \), where \( \vec{b} = [b_1, \dots, b_n]^T \). This is equivalent to specifying \( n \) conditions: \( x_1(a)=b_1, x_2(a)=b_2, \dots, x_n(a)=b_n \).
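Numerically, such an IVP is solved by integrating the right-hand side \( \mathbf{P}(t)\vec{x} + \vec{f}(t) \) forward from \( t = a \). A minimal sketch with SciPy's solve_ivp, where the particular choices of \( \mathbf{P} \), \( \vec{f} \), \( a \), and \( \vec{b} \) are illustrative assumptions only:

```python
import numpy as np
from scipy.integrate import solve_ivp

def P(t):
    # Illustrative (assumed) coefficient matrix P(t).
    return np.array([[0.0, 1.0],
                     [-1.0, 0.0]])

def f(t):
    # Illustrative (assumed) forcing term f(t).
    return np.array([0.0, np.sin(t)])

def rhs(t, x):
    # Right-hand side of the system x' = P(t) x + f(t).
    return P(t) @ x + f(t)

a = 0.0
b = np.array([1.0, 0.0])            # initial condition x(a) = b
sol = solve_ivp(rhs, (a, 10.0), b)
print(sol.y[:, -1])                 # components x_1 and x_2 at t = 10
```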

Example 1

Write the following system as \( \frac{d\vec{x}}{dt} = \mathbf{P}(t)\vec{x} + \vec{f}(t) \):

\( x' = 3x - 4y + z + t \)
\( y' = x - 3z + t^2 \)
\( z' = 6y - 7z + t^3 \)

Solution: Let \( \vec{x} = [x, y, z]^T \). Then:

\[ \mathbf{P}(t) = \begin{bmatrix} 3 & -4 & 1 \\ 1 & 0 & -3 \\ 0 & 6 & -7 \end{bmatrix}, \quad \vec{f}(t) = \begin{bmatrix} t \\ t^2 \\ t^3 \end{bmatrix} \]
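As a sanity check (a SymPy sketch, where the symbols \( x, y, z \) standing in for the unknown functions are an assumption of the sketch), expanding \( \mathbf{P}(t)\vec{x} + \vec{f}(t) \) reproduces the right-hand sides of the original scalar system:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
P = sp.Matrix([[3, -4, 1],
               [1, 0, -3],
               [0, 6, -7]])
f = sp.Matrix([t, t**2, t**3])
X = sp.Matrix([x, y, z])

# Each row matches the corresponding scalar equation's right-hand side.
print(P * X + f)   # Matrix([[3x - 4y + z + t], [x - 3z + t**2], [6y - 7z + t**3]])
```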

(IV) Solutions of Linear Systems

A solution of the system on an interval \( I \) is a column vector function \( \vec{x}(t) \) such that its component functions satisfy the system identically on \( I \).

Theorem 1: Principle of Superposition

If \( \vec{x}_1(t), \vec{x}_2(t), \dots, \vec{x}_k(t) \) are solutions of the homogeneous system \( \vec{x}' = \mathbf{P}(t)\vec{x} \), then any linear combination \( \vec{x} = C_1\vec{x}_1 + \dots + C_k\vec{x}_k \) is also a solution.

Linear Independence and Wronskians

Vector-valued functions \( \vec{x}_1, \dots, \vec{x}_n \) are linearly dependent on \( I \) if there exist constants \( C_1, \dots, C_n \), not all zero, such that \( C_1\vec{x}_1(t) + \dots + C_n\vec{x}_n(t) = \vec{0} \) for all \( t \) in \( I \). If no such constants exist, they are linearly independent.

The Wronskian of \( n \) solutions is the determinant:

\[ W(t) = W(\vec{x}_1, \dots, \vec{x}_n) = \det([\vec{x}_1 \dots \vec{x}_n]) \]

Theorem 2: Wronskians of Solutions

Suppose \( \vec{x}_1, \dots, \vec{x}_n \) are solutions of the homogeneous system on \( I \), and \( \mathbf{P}(t) \) is continuous. Let \( W(t) \) be their Wronskian.

  • If the solutions are linearly dependent on \( I \), then \( W(t) = 0 \) at every point of \( I \).
  • If the solutions are linearly independent on \( I \), then \( W(t) \neq 0 \) at each point of \( I \).

Theorem 3: General Solutions of Homogeneous Systems

If \( \vec{x}_1, \dots, \vec{x}_n \) are linearly independent solutions of the homogeneous system on \( I \), then every solution \( \vec{x}(t) \) can be expressed as a general solution: \[ \vec{x}(t) = C_1\vec{x}_1(t) + \dots + C_n\vec{x}_n(t) \]

Theorem 4: Solution of Nonhomogeneous Systems

A general solution of the nonhomogeneous system is given by: \[ \vec{x}(t) = \vec{x}_c(t) + \vec{x}_p(t) \] where \( \vec{x}_c \) is the general solution of the associated homogeneous system and \( \vec{x}_p \) is any one particular solution of the nonhomogeneous system.

Example 2

Verify that \( \vec{x}_1 = e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \) and \( \vec{x}_2 = e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \) are linearly independent solutions of \( \vec{x}' = \begin{bmatrix} 4 & 1 \\ -2 & 1 \end{bmatrix} \vec{x} \). Solve the IVP for \( \vec{x}(0) = \begin{bmatrix} 11 \\ -7 \end{bmatrix} \).

Step 1: Verification.
For \( \vec{x}_1 \): \( \begin{bmatrix} 4 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \end{bmatrix} e^{3t} = \begin{bmatrix} 3 \\ -3 \end{bmatrix} e^{3t} \). Since \( \vec{x}_1' = 3e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \), it matches.
For \( \vec{x}_2 \): \( \begin{bmatrix} 4 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} e^{2t} = \begin{bmatrix} 2 \\ -4 \end{bmatrix} e^{2t} \). Since \( \vec{x}_2' = 2e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \), it matches.

Step 2: Linear Independence.
\( W(\vec{x}_1, \vec{x}_2) = e^{3t}e^{2t} \begin{vmatrix} 1 & 1 \\ -1 & -2 \end{vmatrix} = e^{5t}(-2 - (-1)) = -e^{5t} \neq 0 \).
General Solution: \( \vec{x}(t) = C_1 e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} + C_2 e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \).

Step 3: Solve for constants.
At \( t = 0 \): \( C_1 \begin{bmatrix} 1 \\ -1 \end{bmatrix} + C_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 11 \\ -7 \end{bmatrix} \implies \begin{cases} C_1 + C_2 = 11 \\ -C_1 - 2C_2 = -7 \end{cases} \).
Adding the two equations gives \( -C_2 = 4 \), so \( C_2 = -4 \) and \( C_1 = 15 \).
Solution of the IVP: \( \vec{x}(t) = 15e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} - 4e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \).
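The whole example can be double-checked symbolically; the sketch below (using SymPy, an assumption of this sketch) confirms that each candidate solves the system, recomputes the Wronskian, and recovers \( C_1 = 15 \), \( C_2 = -4 \):

```python
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
A = sp.Matrix([[4, 1],
               [-2, 1]])
x1 = sp.exp(3*t) * sp.Matrix([1, -1])
x2 = sp.exp(2*t) * sp.Matrix([1, -2])

# Step 1: both residuals x' - A x simplify to the zero vector.
print(sp.simplify(sp.diff(x1, t) - A * x1))
print(sp.simplify(sp.diff(x2, t) - A * x2))

# Step 2: the Wronskian is -exp(5t), never zero.
W = sp.Matrix.hstack(x1, x2).det()
print(sp.simplify(W))

# Step 3: impose x(0) = (11, -7) and solve for the constants.
eqs = (C1 * x1 + C2 * x2).subs(t, 0) - sp.Matrix([11, -7])
print(sp.solve([eqs[0], eqs[1]], [C1, C2]))   # {C1: 15, C2: -4}
```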