(I) Review of Matrix Notation and Linear Systems
An \( m \times n \) matrix \( A \) is a rectangular array of numbers with \( m \) rows and \( n \) columns: \[ A = [a_{ij}]_{m \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \]
Matrix Operations
Assume that \( A = [a_{ij}]_{m \times n} \) and \( B = [b_{ij}]_{m \times n} \).
- Addition and Subtraction: \( A \pm B = [a_{ij} \pm b_{ij}]_{m \times n} \).
- Scalar Multiplication: \( cA = [ca_{ij}]_{m \times n} \), where \( c \) is any number.
- Zero Matrix: Denoted by \( \mathbf{O} \), where \( \mathbf{O} + A = A \).
- Transpose: \( A^T = [a_{ji}]_{n \times m} \), where rows and columns are interchanged.
- Column Vector: Denoted by a lowercase letter with an arrow (or in boldface), \( \vec{x} = [x_1, \dots, x_m]^T \).
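The entrywise operations above can be checked numerically. A minimal sketch with numpy (matrix values are hypothetical, chosen only for illustration):

```python
import numpy as np

# Two hypothetical 2x3 matrices.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[6, 5, 4],
              [3, 2, 1]])

# Addition and scalar multiplication act entry by entry.
print(A + B)        # [[7 7 7], [7 7 7]]
print(2 * A)        # [[2 4 6], [8 10 12]]

# Transpose interchanges rows and columns: (2x3) -> (3x2).
print(A.T.shape)    # (3, 2)

# The zero matrix is the additive identity: O + A = A.
O = np.zeros_like(A)
assert np.array_equal(O + A, A)
```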
Matrix Multiplication
For \( A_{m \times p} \) and \( B_{p \times n} \), the product \( C = AB \) is an \( m \times n \) matrix where each entry \( c_{ij} \) is the scalar product of the \( i^{th} \) row of \( A \) and the \( j^{th} \) column of \( B \): \[ c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj}. \]
Properties: Multiplication is generally not commutative (\( AB \neq BA \)). Furthermore, \( AB = \mathbf{O} \) does not necessarily imply \( A = \mathbf{O} \) or \( B = \mathbf{O} \).
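Both failures can be exhibited with a single pair of \( 2 \times 2 \) matrices. A quick numpy check (the matrices are a standard illustrative choice, not from the text):

```python
import numpy as np

# Nonzero matrices whose product in one order is the zero matrix.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

print(A @ B)   # [[0 0], [0 0]]  -- AB = O even though A != O and B != O
print(B @ A)   # [[0 1], [0 0]]  -- and AB != BA
```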
Inverses and Determinants
The Identity Matrix \( I_{n \times n} \) has 1s on the diagonal and 0s elsewhere. A square matrix \( A \) is nonsingular (invertible) if there exists \( A^{-1} \) such that \( AA^{-1} = I \). This exists if and only if the determinant \( |A| \neq 0 \).
- 2x2 Determinant: \( \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21} \).
- 2x2 Inverse: \( A^{-1} = \frac{1}{|A|} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix} \).
- Cofactor Expansion Along the \( i \)th Row: For an \( n \times n \) matrix \( A = [a_{ij}]_{n \times n} \), \( |A| = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} |A_{ij}| = \sum_{j=1}^{n} a_{ij} C_{ij} \), where \( A_{ij} \) is the submatrix obtained by deleting row \( i \) and column \( j \) of \( A \), and the cofactor is \( C_{ij} = (-1)^{i+j} |A_{ij}| \).
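The cofactor formula translates directly into a recursive routine. A sketch (the \( 3 \times 3 \) example matrix is hypothetical), compared against numpy's determinant:

```python
import numpy as np

def det_cofactor(M, i=0):
    """Determinant by cofactor expansion along row i (0-indexed):
    |A| = sum_j (-1)^(i+j) a_ij |A_ij|."""
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0
    for j in range(n):
        # A_ij: delete row i and column j.
        minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
        total += (-1) ** (i + j) * M[i, j] * det_cofactor(minor)
    return total

A = np.array([[2, 1, 0],
              [1, 3, 4],
              [0, 5, 6]])
print(det_cofactor(A))          # expansion along the first row: -10
print(round(np.linalg.det(A)))  # same value via LU factorization: -10
```

Cofactor expansion costs \( O(n!) \), so it is a definition and hand-computation tool; numerical libraries use factorizations instead.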
(II) Matrix-Valued Functions
A matrix-valued function \( A(t) = [a_{ij}(t)] \) and a vector-valued function \( \vec{x}(t) = [x_i(t)] \) are differentiated component-wise: \[ A'(t) = [a_{ij}'(t)], \qquad \vec{x}\,'(t) = [x_i'(t)]. \]
Rules: \( (A+B)' = A' + B' \) and the product rule \( (AB)' = AB' + A'B \); since matrix multiplication is not commutative, the order of the factors in each term must be preserved.
Given \( A(t) = \begin{bmatrix} e^t & t \\ 5t & -1 \end{bmatrix} \) and \( \vec{b}(t) = \begin{bmatrix} 3 \\ 2e^{-t} \end{bmatrix} \), verify the product rule \( (A\vec{b})' = A\vec{b}' + A'\vec{b} \).
Verification: Left as an exercise; differentiate \( A\vec{b} \) entry-wise and confirm it equals \( A\vec{b}' + A'\vec{b} \) entry by entry.
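The entry-wise check can also be done symbolically. A sketch with sympy, using the \( A(t) \) and \( \vec{b}(t) \) from the exercise:

```python
import sympy as sp

t = sp.symbols('t')

# The matrices from the exercise above.
A = sp.Matrix([[sp.exp(t), t],
               [5 * t, -1]])
b = sp.Matrix([3, 2 * sp.exp(-t)])

# Left side: differentiate the product entrywise.
lhs = (A * b).diff(t)
# Right side: the product rule A b' + A' b.
rhs = A * b.diff(t) + A.diff(t) * b

# The difference simplifies to the zero vector, so the rule holds.
print(sp.simplify(lhs - rhs))  # Matrix([[0], [0]])
```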
(III) First-Order Linear Systems
A first-order linear system of \( n \) differential equations can be written as:
\( x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + f_1(t) \)
\( x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + f_2(t) \)
\( \vdots \)
\( x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + f_n(t) \)
Using vector-matrix notation, we define the column vectors \( \vec{x} = [x_1, \dots, x_n]^T \) and \( \vec{f}(t) = [f_1(t), \dots, f_n(t)]^T \), and the coefficient matrix \( \mathbf{P}(t) = [p_{ij}(t)]_{n \times n} \).
The nonhomogeneous system is written as: \( \vec{x}\,' = \mathbf{P}(t)\vec{x} + \vec{f}(t) \).
The associated homogeneous equation is: \( \vec{x}\,' = \mathbf{P}(t)\vec{x} \).
An Initial Value Problem (IVP) involves solving the system subject to the initial condition \( \vec{x}(a) = \vec{b} \), where \( \vec{b} = [b_1, \dots, b_n]^T \). This is equivalent to specifying \( n \) conditions: \( x_1(a)=b_1, x_2(a)=b_2, \dots, x_n(a)=b_n \).
Write the following system as \( \frac{d\vec{x}}{dt} = \mathbf{P}(t)\vec{x} + \vec{f}(t) \):
\( x' = 3x - 4y + z + t \)
\( y' = x - 3z + t^2 \)
\( z' = 6y - 7z + t^3 \)
Solution: Let \( \vec{x} = [x, y, z]^T \). Then: \[ \frac{d\vec{x}}{dt} = \begin{bmatrix} 3 & -4 & 1 \\ 1 & 0 & -3 \\ 0 & 6 & -7 \end{bmatrix} \vec{x} + \begin{bmatrix} t \\ t^2 \\ t^3 \end{bmatrix} \]
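As a check, the coefficient matrix \( \mathbf{P} \) (rows read off from the coefficients of \( x, y, z \), with a 0 for each missing variable) and the forcing vector \( \vec{f}(t) \) can be multiplied back out symbolically:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# Coefficient matrix and forcing vector read off from the system above.
P = sp.Matrix([[3, -4, 1],
               [1, 0, -3],
               [0, 6, -7]])
f = sp.Matrix([t, t**2, t**3])
X = sp.Matrix([x, y, z])

# P*X + f reproduces the right-hand sides of the three equations.
rhs = P * X + f
print(rhs)
```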
(IV) Solutions of Linear Systems
A solution of the system on an interval \( I \) is a column vector function \( \vec{x}(t) \) such that its component functions satisfy the system identically on \( I \).
Theorem 1: Principle of Superposition
If \( \vec{x}_1(t), \vec{x}_2(t), \dots, \vec{x}_k(t) \) are solutions of the homogeneous system \( \vec{x}' = \mathbf{P}(t)\vec{x} \), then any linear combination \( \vec{x} = C_1\vec{x}_1 + \dots + C_k\vec{x}_k \) is also a solution.
Linear Independence and Wronskians
Vector-valued functions \( \vec{x}_1, \dots, \vec{x}_n \) are linearly dependent on \( I \) if there exist constants \( C_1, \dots, C_n \), not all zero, such that \( C_1\vec{x}_1(t) + \dots + C_n\vec{x}_n(t) = \vec{0} \). If no such constants exist, they are linearly independent.
The Wronskian of \( n \) solutions is the determinant of the matrix whose columns are the solution vectors: \[ W(t) = \begin{vmatrix} x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t) \\ x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t) \\ \vdots & \vdots & & \vdots \\ x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t) \end{vmatrix} \] where \( x_{ij}(t) \) denotes the \( i \)th component of \( \vec{x}_j(t) \).
Theorem 2: Wronskians of Solutions
Suppose \( \vec{x}_1, \dots, \vec{x}_n \) are solutions of the homogeneous system on \( I \), and \( \mathbf{P}(t) \) is continuous. Let \( W(t) \) be their Wronskian.
- If the solutions are linearly dependent on \( I \), then \( W(t) = 0 \) at every point of \( I \).
- If the solutions are linearly independent on \( I \), then \( W(t) \neq 0 \) at each point of \( I \).
Theorem 3: General Solutions of Homogeneous Systems
If \( \vec{x}_1, \dots, \vec{x}_n \) are linearly independent solutions of the homogeneous system on \( I \), then every solution \( \vec{x}(t) \) can be expressed as a general solution: \[ \vec{x}(t) = C_1\vec{x}_1(t) + \dots + C_n\vec{x}_n(t) \]
Theorem 4: Solution of Nonhomogeneous Systems
A general solution of the nonhomogeneous system is given by: \[ \vec{x}(t) = \vec{x}_c(t) + \vec{x}_p(t) \] where \( \vec{x}_c \) is the general solution of the associated homogeneous system and \( \vec{x}_p \) is any one particular solution of the nonhomogeneous system.
Verify that \( \vec{x}_1 = e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \) and \( \vec{x}_2 = e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \) are linearly independent solutions of \( \vec{x}' = \begin{bmatrix} 4 & 1 \\ -2 & 1 \end{bmatrix} \vec{x} \). Solve the IVP for \( \vec{x}(0) = \begin{bmatrix} 11 \\ -7 \end{bmatrix} \).
Step 1: Verification.
For \( \vec{x}_1 \): \( \begin{bmatrix} 4 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \end{bmatrix} e^{3t} = \begin{bmatrix} 3 \\ -3 \end{bmatrix} e^{3t} \). Since \( \vec{x}_1' = 3e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \), it matches.
For \( \vec{x}_2 \): \( \begin{bmatrix} 4 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} e^{2t} = \begin{bmatrix} 2 \\ -4 \end{bmatrix} e^{2t} \). Since \( \vec{x}_2' = 2e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \), it matches.
Step 2: Linear Independence.
\( W(\vec{x}_1, \vec{x}_2) = e^{3t}e^{2t} \begin{vmatrix} 1 & 1 \\ -1 & -2 \end{vmatrix} = e^{5t}(-2 - (-1)) = -e^{5t} \neq 0 \).
General Solution: \( \vec{x}(t) = C_1 e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} + C_2 e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \).
Step 3: Solve for constants.
At \( t = 0 \): \( C_1 \begin{bmatrix} 1 \\ -1 \end{bmatrix} + C_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 11 \\ -7 \end{bmatrix} \implies \begin{cases} C_1 + C_2 = 11 \\ -C_1 - 2C_2 = -7 \end{cases} \).
Adding the two equations gives \( -C_2 = 4 \), so \( C_2 = -4 \) and \( C_1 = 15 \).
Particular Solution: \( \vec{x}(t) = 15e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix} - 4e^{2t} \begin{bmatrix} 1 \\ -2 \end{bmatrix} \).
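All three steps of this example can be confirmed symbolically. A sketch with sympy:

```python
import sympy as sp

t = sp.symbols('t')

A = sp.Matrix([[4, 1],
               [-2, 1]])
x1 = sp.exp(3 * t) * sp.Matrix([1, -1])
x2 = sp.exp(2 * t) * sp.Matrix([1, -2])

# Step 1: each candidate satisfies x' = A x.
assert sp.simplify(x1.diff(t) - A * x1) == sp.zeros(2, 1)
assert sp.simplify(x2.diff(t) - A * x2) == sp.zeros(2, 1)

# Step 2: the Wronskian is the determinant of [x1 x2].
W = sp.Matrix.hstack(x1, x2).det()
print(sp.simplify(W))  # -exp(5*t), never zero

# Step 3: x(t) = 15 x1 - 4 x2 satisfies the initial condition.
x = 15 * x1 - 4 * x2
assert x.subs(t, 0) == sp.Matrix([11, -7])
```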