
Review of Linear Algebra (continued)

Eigenvalues and Eigenvectors

Matrix \( A \) (\( n \times n \)) has eigenvalues (\( \lambda \)) and eigenvectors (\( \vec{x} \)) such that:

\[ A\vec{x} = \lambda\vec{x} \]

Eigenvectors preserve their directions.

[Figure: coordinate axes showing \( \vec{x} \) and the scaled vector \( \lambda\vec{x} \) pointing in the same direction.]

\( \lambda \) scales the length but retains direction.

Example

For example, \( A = \begin{bmatrix} 5 & 0 \\ 2 & 1 \end{bmatrix} \)

One eigenvector is \( \begin{bmatrix} 2 \\ 1 \end{bmatrix} \)

\[ \underbrace{\begin{bmatrix} 5 & 0 \\ 2 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} 2 \\ 1 \end{bmatrix}}_{\vec{x}} = \begin{bmatrix} 10 \\ 5 \end{bmatrix} = 5 \begin{bmatrix} 2 \\ 1 \end{bmatrix} \]

lengthened by factor of 5 (eigenvalue)
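This multiplication is easy to verify numerically; `mat_vec` below is a small helper written for this sketch, not a library function.

```python
# Verifying the example: multiplying the eigenvector x = (2, 1) by A
# should just scale it by the eigenvalue lambda = 5.

def mat_vec(A, x):
    """Multiply an n x n matrix (list of rows) by a vector (list)."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[5, 0], [2, 1]]
x = [2, 1]

Ax = mat_vec(A, x)
print(Ax)                    # [10, 5]
print([5 * xi for xi in x])  # [10, 5] -> same vector, scaled by lambda = 5
```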


To find the eigenvalues and eigenvectors, we work with:

\[ \begin{aligned} A\vec{x} &= \lambda\vec{x} \\ A\vec{x} - \lambda\vec{x} &= \vec{0} \\ (A - \lambda I)\vec{x} &= \vec{0} \end{aligned} \]

Homogeneous system: \( \vec{x} = \vec{0} \) is always a solution (the trivial one), but we don't want that.

For non-trivial solutions to exist, the system must have infinitely many solutions, which requires the coefficient matrix to be singular:

\[ \det(A - \lambda I) = 0 \]

\( \rightarrow \) solve for \( \lambda \): an \( n \times n \) matrix \( A \) has \( n \) eigenvalues (counting multiplicity)

Then solve for \( \vec{x} \) using those \( \lambda \)'s:

\[ (A - \lambda I)\vec{x} = \vec{0} \]
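For the \( 2 \times 2 \) case, the eigenvalue step can be sketched in a few lines: \( \det(A - \lambda I) = 0 \) expands to the quadratic \( \lambda^2 - \operatorname{tr}(A)\lambda + \det(A) = 0 \). The function name `eigenvalues_2x2` is just for illustration, and the sketch assumes the eigenvalues are real.

```python
# Sketch of the 2x2 case: det(A - lambda*I) = 0 expands to
# lambda^2 - tr(A)*lambda + det(A) = 0, a quadratic we can solve directly.
import math

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix (assumes the discriminant is >= 0)."""
    (a, b), (c, d) = A
    tr = a + d
    det = a * d - b * c
    root = math.sqrt(tr * tr - 4 * det)  # raises ValueError if complex
    return sorted([(tr - root) / 2, (tr + root) / 2])

print(eigenvalues_2x2([[7, 4], [-3, -1]]))  # [1.0, 5.0]
```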

Eigenvalues and Eigenvectors

\[ A = \begin{bmatrix} 7 & 4 \\ -3 & -1 \end{bmatrix} \]

To find the eigenvalues, we solve the characteristic equation:

\[ \det(A - \lambda I) = \begin{vmatrix} 7 - \lambda & 4 \\ -3 & -1 - \lambda \end{vmatrix} = 0 \]
\[ (7 - \lambda)(-1 - \lambda) + 12 = 0 \]
\[ \lambda^2 - 6\lambda + 5 = 0 \quad \text{characteristic equation} \]
\[ (\lambda - 5)(\lambda - 1) = 0 \]
\[ \lambda = 1, \quad \lambda = 5 \]

Find the corresponding eigenvectors

For \( \lambda = 1 \):

\[ (A - \lambda I)\vec{x} = \vec{0} \]
\[ \left[ \begin{array}{cc|c} 6 & 4 & 0 \\ -3 & -2 & 0 \end{array} \right] \rightarrow \dots \rightarrow \left[ \begin{array}{cc|c} -3 & -2 & 0 \\ 0 & 0 & 0 \end{array} \right] \]
\[ \vec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \]

From the reduced matrix, the 2nd row of \( (A - \lambda I)\vec{x} = \vec{0} \) implies that \( x_2 \) is a free variable:

\[ x_2 = r \]

From the 1st row:

\[ -3x_1 - 2x_2 = 0 \]
\[ x_1 = -\frac{2}{3}x_2 = -\frac{2}{3}r \]
\[ \vec{x} = \begin{bmatrix} -\frac{2}{3}r \\ r \end{bmatrix} \quad \text{choose any nonzero } r \]

Let \( r = -3 \):

\[ \vec{x} = \begin{bmatrix} 2 \\ -3 \end{bmatrix}, \quad \lambda = 1 \]

Following the same steps for \( \lambda = 5 \), we get:

\[ \vec{x} = \begin{bmatrix} -2 \\ 1 \end{bmatrix}, \quad \lambda = 5 \]
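Both eigenpairs can be checked by direct multiplication; `mat_vec` is a small helper written for this check, not a library routine.

```python
# Direct check of both eigenpairs found above: A x must equal lambda x.

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[7, 4], [-3, -1]]

for lam, x in [(1, [2, -3]), (5, [-2, 1])]:
    assert mat_vec(A, x) == [lam * xi for xi in x]
print("both eigenpairs satisfy A x = lambda x")
```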

Finding Eigenvalues of a 3x3 Matrix

Given the matrix:

\[ A = \begin{bmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix} \]

Characteristic Equation

To find the eigenvalues, we solve \( \det(A - \lambda I) = 0 \):

\[ \begin{vmatrix} 4 - \lambda & 0 & 1 \\ -2 & 1 - \lambda & 0 \\ -2 & 0 & 1 - \lambda \end{vmatrix} = 0 \]

Cofactor expansion (here, along col 2):

\[ (1 - \lambda) \begin{vmatrix} 4 - \lambda & 1 \\ -2 & 1 - \lambda \end{vmatrix} = 0 \]
\[ (1 - \lambda) \left[ (4 - \lambda)(1 - \lambda) + 2 \right] = 0 \]
\[ (1 - \lambda) (\lambda^2 - 5\lambda + 6) = 0 \]
\[ (1 - \lambda) (\lambda - 2) (\lambda - 3) = 0 \]
\[ \lambda = 1, 2, 3 \]
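To confirm the roots, a direct cofactor-expansion determinant can evaluate \( \det(A - \lambda I) \) at each candidate \( \lambda \); `det3` is a helper written for this sketch.

```python
# Confirming the roots: det(A - lambda*I) should vanish at lambda = 1, 2, 3.
# det3 implements cofactor expansion along the first row.

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[4, 0, 1], [-2, 1, 0], [-2, 0, 1]]

for lam in (1, 2, 3):
    M = [[A[r][c] - (lam if r == c else 0) for c in range(3)] for r in range(3)]
    print(lam, det3(M))  # determinant is 0 for each lambda
```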

Finding Eigenvectors

Case 1: \( \lambda = 1 \)

Solve \( (A - \lambda I) \vec{x} = \vec{0} \):

\[ \left[ \begin{array}{ccc|c} 3 & 0 & 1 & 0 \\ -2 & 0 & 0 & 0 \\ -2 & 0 & 0 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{ccc|c} 3 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right] \]
\[ x_2 = \text{free var}, \quad x_1 = 0, \quad x_3 = 0 \]
\[ \vec{x} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \quad \lambda = 1 \]

Remaining Eigenvectors

Following the same process, we get the other eigenvectors:

\[ \vec{x} = \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix} \quad \lambda = 2 \]
\[ \vec{x} = \begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix} \quad \lambda = 3 \]

Eigenvectors corresponding to distinct eigenvalues are linearly independent from one another (good basis vectors)
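Both claims, that each pair satisfies \( A\vec{x} = \lambda\vec{x} \) and that the eigenvectors are independent, can be checked numerically; `mat_vec` and `det3` are small helpers written for this sketch.

```python
# Check each eigenpair, then confirm independence: the matrix V whose
# columns are the three eigenvectors must have nonzero determinant.

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[4, 0, 1], [-2, 1, 0], [-2, 0, 1]]
pairs = [(1, [0, 1, 0]), (2, [-1, 2, 2]), (3, [-1, 1, 1])]

for lam, x in pairs:
    assert mat_vec(A, x) == [lam * xi for xi in x]

V = [[x[r] for _, x in pairs] for r in range(3)]  # eigenvectors as columns
print(det3(V))  # -1, nonzero => linearly independent
```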


7.4 Basic Theory of Systems of First-Order Linear Equations

\[ \begin{aligned} x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \dots + p_{1n}(t)x_n + g_1(t) \\ x_2' &= p_{21}(t)x_1 + p_{22}(t)x_2 + \dots + p_{2n}(t)x_n + g_2(t) \\ &\vdots \\ x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \dots + p_{nn}(t)x_n + g_n(t) \end{aligned} \]

in matrix form

\[ \vec{x}' = P(t)\vec{x} + \vec{g}(t) \]

Where the components are defined as:

\[ \vec{x}' = \begin{bmatrix} x_1' \\ \vdots \\ x_n' \end{bmatrix}, \quad P(t) = \begin{bmatrix} p_{11} & p_{12} & \dots & p_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{bmatrix}, \quad \vec{g}(t) = \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{bmatrix} \]

if \( \vec{g} = \vec{0} \), \( \vec{x}' = P(t)\vec{x} \) is a homogeneous system.

it has solutions

\[ \vec{x}^{(1)} = \begin{bmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{n1} \end{bmatrix}, \quad \vec{x}^{(2)} = \begin{bmatrix} x_{12} \\ x_{22} \\ \vdots \\ x_{n2} \end{bmatrix}, \dots, \vec{x}^{(k)} = \begin{bmatrix} x_{1k} \\ \vdots \\ x_{nk} \end{bmatrix} \]

\( k \) of these; here we take \( k = n \) linearly independent solutions


these solutions form a fundamental set of solutions

the general solution is a linear combination of the vectors in that set

\[ \vec{x} = c_1\vec{x}^{(1)} + c_2\vec{x}^{(2)} + \dots + c_n\vec{x}^{(n)} \]

the Wronskian of the solutions is

\[ W[\vec{x}^{(1)}, \vec{x}^{(2)}, \dots, \vec{x}^{(n)}] = \left| \vec{x}^{(1)} \quad \vec{x}^{(2)} \quad \dots \quad \vec{x}^{(n)} \right| \]

fundamental solutions as columns

\( W(t_0) \neq 0 \rightarrow \) solutions are linearly independent at that \( t = t_0 \)

if we have initial conditions \( x_1(t_0) = x_{10}, \, x_2(t_0) = x_{20}, \dots, x_n(t_0) = x_{n0} \),

then there is one and only one choice of the constants \( c_1, c_2, \dots, c_n \) in

\[ \vec{x} = c_1\vec{x}^{(1)} + c_2\vec{x}^{(2)} + \dots + c_n\vec{x}^{(n)} \]
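Finding those constants is just a linear system: evaluating the general solution at \( t_0 \) gives

\[ \begin{bmatrix} \vec{x}^{(1)}(t_0) & \vec{x}^{(2)}(t_0) & \dots & \vec{x}^{(n)}(t_0) \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} x_{10} \\ x_{20} \\ \vdots \\ x_{n0} \end{bmatrix} \]

and since \( W(t_0) \neq 0 \), the coefficient matrix is invertible, so the \( c_i \)'s are uniquely determined.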

Abel's Theorem

If \( \vec{x}^{(1)}, \vec{x}^{(2)}, \dots, \vec{x}^{(n)} \) are solutions on some interval of \( t \), then on that interval the Wronskian is either always zero or never zero.

→ only need to check ONE \( t \) on an interval to ensure nonzero \( W \) throughout that interval

True because \( W \) satisfies the first-order equation \( W' - \operatorname{tr}(P(t))\,W = 0 \) (Abel's formula),

\[ \text{whose solution is } W = c\, e^{\int \operatorname{tr}(P(t))\, dt} \]

which is either identically zero (\( c = 0 \)) or never zero.
\[ \vec{x}' = \begin{bmatrix} 1 & 1 \\ 4 & -2 \end{bmatrix} \vec{x} \quad \text{has solutions} \quad \vec{x}^{(1)} = \begin{bmatrix} 1 \\ -4 \end{bmatrix} e^{-3t}, \quad \vec{x}^{(2)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{2t} \]
\[ W = \begin{vmatrix} e^{-3t} & e^{2t} \\ -4e^{-3t} & e^{2t} \end{vmatrix} = 5e^{-t} \neq 0 \text{ for any } t \]
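A quick numeric check of this example: the vector parts of the two solutions are eigenvectors of the coefficient matrix (with \( \lambda = -3 \) and \( \lambda = 2 \)), and \( W(t) \) evaluates to \( 5e^{-t} \). `mat_vec` and `W` are helpers written for this sketch.

```python
# The vector parts of x^(1), x^(2) are eigenvectors of the coefficient
# matrix, and the Wronskian works out to 5 e^{-t}.
import math

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

P = [[1, 1], [4, -2]]
assert mat_vec(P, [1, -4]) == [-3, 12]   # = -3 * (1, -4)
assert mat_vec(P, [1, 1]) == [2, 2]      # =  2 * (1, 1)

def W(t):
    # determinant with the two solutions as columns
    return math.exp(-3 * t) * math.exp(2 * t) + 4 * math.exp(-3 * t) * math.exp(2 * t)

print(W(0))                                  # 5.0
print(math.isclose(W(1), 5 * math.exp(-1)))  # True
```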