
7.1-7.3 Systems of Diff. Eqs. + Review of Linear Alg.

mass-spring-damper: \( mu'' + \gamma u' + ku = f(t) \rightarrow \text{scalar eq. (2nd-order)} \)

another way to look at it: let \( x_1 = u \)

\[ x_2 = u' \]

notice \( x_1' = x_2 \) (because \( x_1 = u, x_2 = u' \))

\[ x_2' = -\frac{k}{m}x_1 - \frac{\gamma}{m}x_2 + \frac{f(t)}{m} \quad (\text{because } u'' = -\frac{k}{m}u - \frac{\gamma}{m}u' + \frac{f(t)}{m}) \]

that mass-spring-damper is now a system of two 1st-order eqs.

Another one: \( u^{(4)} - u = 0 \quad u(0)=4, u'(0)=3, u''(0)=2, u'''(0)=1 \)

define \( x_1 = u, x_2 = u', x_3 = u'', x_4 = u''' \)

\[ \begin{cases} x_1' = x_2 \\ x_2' = x_3 \\ x_3' = x_4 \\ x_4' = x_1 \end{cases} \quad \text{definition of } x_i \quad \begin{cases} x_1(0) = 4 \\ x_2(0) = 3 \\ x_3(0) = 2 \\ x_4(0) = 1 \end{cases} \]

\( \rightarrow \) the last equation, \( x_4' = x_1 \), comes from \( u^{(4)} - u = 0 \)
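In the matrix form developed below, this system reads

\[ \vec{x}' = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{bmatrix} 4 \\ 3 \\ 2 \\ 1 \end{bmatrix} \]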


an \( n^{\text{th}} \)-order diff. eq. \( \rightarrow n \) 1st-order eqs. in a system

\[ \begin{aligned} n^{\text{th}}\text{-order linear} \rightarrow x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + g_1(t) \\ x_2' &= p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + g_2(t) \\ &\vdots \\ x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + g_n(t) \end{aligned} \]

if all \( g_i(t) = 0 \rightarrow \) homogeneous system (else nonhomogeneous)

the system has a unique solution on any open interval of \( t \) containing the initial point on which all the \( p_{ij}(t) \) and \( g_i(t) \) are continuous

back to:

\[ \begin{aligned} x_1' &= x_2 \\ x_2' &= -\frac{k}{m}x_1 - \frac{\gamma}{m}x_2 + \frac{f(t)}{m} \end{aligned} \]

can be put into matrix form

\[ \begin{bmatrix} x_1' \\ x_2' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{\gamma}{m} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{f(t)}{m} \end{bmatrix} \]

that eq. is in the form of

\[ \vec{x}' = A(t) \vec{x} + \vec{g}(t) \]

Note: In the equation above, \( A(t) \) is a matrix, while \( \vec{x}' \), \( \vec{x} \), and \( \vec{g}(t) \) are vectors.

(looks a lot like 1st-order eq. \( y' = ay + g \))
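One payoff of the first-order form: it is exactly what numerical ODE solvers expect. A minimal SciPy sketch of the mass-spring-damper system above, with illustrative values \( m = 1 \), \( \gamma = 0.5 \), \( k = 2 \), a sample forcing \( f(t) = \cos t \), and initial conditions chosen just for the demo (none of these numbers are from the notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only (not from the notes)
m, gamma, k = 1.0, 0.5, 2.0
f = np.cos                      # sample forcing f(t) = cos(t)

# x' = A x + g(t), with x1 = u, x2 = u'
A = np.array([[0.0, 1.0],
              [-k / m, -gamma / m]])

def rhs(t, x):
    return A @ x + np.array([0.0, f(t) / m])

# u(0) = 1, u'(0) = 0, chosen just for the demo
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.1)
u = sol.y[0]                    # first state variable is the displacement u
```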

the textbook uses parentheses \( ( \ ) \) for matrices, e.g. \( \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \), but I like using brackets \( [ \ ] \)

we will need to review some linear algebra

matrix addition: \( A + B \) is defined only if \( A \) and \( B \) are the same size

\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \]

\[ A + B = \begin{bmatrix} 1+5 & 2+6 \\ 3+7 & 4+8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix} \]

\[ A - B = \begin{bmatrix} -4 & -4 \\ -4 & -4 \end{bmatrix} \]

scalar multiplication:

\[ 5 \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 5 & 10 \\ 15 & 20 \end{bmatrix} \]
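These operations in NumPy, as a quick sketch (any array library behaves the same way):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

A + B   # [[ 6,  8], [10, 12]]
A - B   # [[-4, -4], [-4, -4]]
5 * A   # [[ 5, 10], [15, 20]]  (scalar multiplication)
```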

vector: matrix with one column or one row

\[ \vec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \quad \text{column vector} \]

in print: boldface (\( \mathbf{x} \)); in these notes I will use an over-arrow (\( \vec{x} \))

\( [ 1 \quad 2 \quad 3 \quad 4 ] \) is a row vector

matrix multiplication:

\( AB \) is possible only if # cols of A is the same as # rows of B

\[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 \\ 6 \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 6 \\ 3 \cdot 5 + 4 \cdot 6 \end{bmatrix} = \begin{bmatrix} 17 \\ 35 \end{bmatrix} \]

Note: The dimensions are \( 2 \times 2 \) (row \( \times \) col) and \( 2 \times 1 \) (row \( \times \) col), where the inner dimensions (2) match.


Matrix Multiplication and Inverses

Matrix Multiplication Example

\[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}_{2 \times 2} \begin{bmatrix} 5 & 1 & 2 \\ -1 & 0 & 3 \end{bmatrix}_{2 \times 3} = \begin{bmatrix} 3 & 1 & 8 \\ 11 & 3 & 18 \end{bmatrix}_{2 \times 3} \]

Note: The inner dimensions (2) match, and the result is a \( 2 \times 3 \) matrix.
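Verifying both products with NumPy's `@` operator (a sketch):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

# 2x2 times 2x1 (matrix-vector product)
A @ np.array([5, 6])                 # [17, 35]

# 2x2 times 2x3 -> 2x3
B = np.array([[5, 1, 2], [-1, 0, 3]])
A @ B                                # [[ 3,  1,  8], [11,  3, 18]]
```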

Matrix Inverse

The inverse of a matrix \( A \), denoted as \( A^{-1} \), satisfies:

\[ AA^{-1} = I \] \[ A^{-1}A = I \]
Identity matrix \( I \): ones on the diagonal, zeros elsewhere; same size as \( A \)

Calculating a \( 2 \times 2 \) Inverse

For a \( 2 \times 2 \) matrix \( A \):

\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \]
\[ A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \frac{1}{\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix}} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix} \]
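This is the general \( 2 \times 2 \) shortcut: swap the diagonal entries, negate the off-diagonal entries, and divide by the determinant:

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}, \qquad ad - bc \neq 0 \]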

Verification (Check)

Verify that \( A^{-1}A = I \):

\[ \begin{bmatrix} -2 & 1 \\ 3/2 & -1/2 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]
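The same check in NumPy (a quick sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
A_inv = np.linalg.inv(A)   # [[-2. ,  1. ], [ 1.5, -0.5]]
A_inv @ A                  # identity, up to floating-point round-off
```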

\( 3 \times 3 \) (and beyond)

To find the inverse of larger matrices, use an augmented matrix:

\[ [ A \mid I ] \]

\( \rightarrow \) Gaussian elimination

\[ [ I \mid A^{-1} ] \]

Example of the augmented structure:

\[ \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{array} \right] \]

The left side represents matrix \( A \) and the right side represents the identity matrix \( I \).


Matrix Inversion and Linear Independence

Row Operations for Matrix Inversion

\[ \begin{matrix} (-1)R_1 + R_2 \\ (-1)R_1 + R_3 \end{matrix} \xrightarrow{} \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & 1 & 0 \\ 0 & 1 & 1 & -1 & 0 & 1 \end{array} \right] \]
\[ \xrightarrow{(-1)R_2 + R_3} \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & 1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 1 \end{array} \right] = \left[ \, I \mid A^{-1} \, \right] \]
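The same \( [A \mid I] \rightarrow [I \mid A^{-1}] \) procedure as a minimal Gauss-Jordan sketch in NumPy (illustration only, with simple partial pivoting; not production code):

```python
import numpy as np

def invert(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for col in range(n):
        # Swap in the row with the largest pivot (partial pivoting).
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]            # scale pivot row so the pivot is 1
        for row in range(n):             # eliminate the column elsewhere
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                      # right half is now A^{-1}

A = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]])
print(invert(A))   # [[ 1,  0,  0], [-1,  1,  0], [ 0, -1,  1]]
```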

Linear Independence

Vectors \( \vec{x}_i \) are linearly independent if

\[ \underbrace{c_1 \vec{x}_1 + c_2 \vec{x}_2 + \dots + c_n \vec{x}_n}_{\text{linear combination of } \vec{x}_i} = \underbrace{\vec{0}}_{\text{zero vector}} \]

holds only when \( c_1 = c_2 = \dots = c_n = 0 \).


Examples of Linear Independence

For example, \( \vec{x}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \), \( \vec{x}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \)

\[ c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \text{ only if } c_1 = c_2 = 0 \]

So, they are linearly independent (they can be used as basis vectors to span a certain vector space).

\( \vec{x}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \), \( \vec{x}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \) also linearly indp.

How about \( \begin{bmatrix} 1 \\ 1 \end{bmatrix} \) and \( \begin{bmatrix} 2 \\ 2 \end{bmatrix} \)? No: \( \begin{bmatrix} 2 \\ 2 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \), so e.g. \( c_1 = 2, c_2 = -1 \) gives the zero vector.

Determinant Condition

For \( n \) vectors in \( \mathbb{R}^n \): if the \( \vec{x}_i \) are linearly independent,

\[ | \vec{x}_1 \ \vec{x}_2 \ \dots \ \vec{x}_n | \neq 0 \quad (\text{connected to Wronskian}) \]

where the matrix has the \( \vec{x}_i \) as its columns.
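Checking the determinant condition numerically (a sketch; in floating point, np.linalg.matrix_rank is the more robust test):

```python
import numpy as np

# Each list becomes a column of the matrix
X_indep = np.column_stack([[1, 0], [1, 1]])   # x1 = [1,0], x2 = [1,1]
X_dep   = np.column_stack([[1, 1], [2, 2]])   # x2 = 2 * x1

np.linalg.det(X_indep)   # 1.0 -> nonzero, linearly independent
np.linalg.det(X_dep)     # 0.0 -> zero, linearly dependent
```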


If not linearly indp., \[ | \vec{x}_1 \vec{x}_2 \dots \vec{x}_n | = 0 \]

Solution of systems:

\[ A \vec{x} = \vec{0} \rightarrow \vec{x} = \vec{0} \text{ is always a solution (trivial solution)} \]

Unique (only the trivial solution) exactly when the columns of \( A \) are linearly independent, i.e.

\[ |A| \neq 0 \]

The same criterion applies to \( A \vec{x} = \vec{b} \): a unique solution exists iff \( |A| \neq 0 \).
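A numerical illustration of the criterion (sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # det = -2, nonzero
b = np.array([5.0, 6.0])
np.linalg.solve(A, b)                    # unique solution [-4. ,  4.5]

S = np.array([[1.0, 2.0], [2.0, 4.0]])   # det = 0: singular
# np.linalg.solve(S, b) raises LinAlgError; S x = 0 has
# infinitely many solutions, e.g. any multiple of [2, -1].
```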

Next time: eigenvalues and eigenvectors