PAGE 1

5.3 (1) Diagonalization

Exam 2 covers material up to this lesson.

If \( A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \), in general \( A^k \neq \begin{bmatrix} a^k & b^k \\ c^k & d^k \end{bmatrix} \)

but if \( A = \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix} \), then \( A^k = \begin{bmatrix} a^k & 0 \\ 0 & d^k \end{bmatrix} \)

why?

\[ A = \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix} \]
\[ A^3 = \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix} \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix} \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix} \]
\[ = \begin{bmatrix} a^2 & 0 \\ 0 & d^2 \end{bmatrix} \begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix} = \begin{bmatrix} a^3 & 0 \\ 0 & d^3 \end{bmatrix} \]
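As a quick numeric check of the diagonal-power rule, here is a minimal sketch in plain Python (the `matmul` helper and the sample entries are my own, not from the notes):

```python
def matmul(A, B):
    # Multiply two matrices stored as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a, d = 2, 5                      # arbitrary sample diagonal entries
A = [[a, 0], [0, d]]

A3 = matmul(matmul(A, A), A)     # A^3 by repeated multiplication
print(A3)                        # [[8, 0], [0, 125]] == [[a**3, 0], [0, d**3]]
```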

If \( A \) is a square matrix, then it is diagonalizable if it is similar to a diagonal matrix \( D \). This means there exists an invertible matrix \( P \) such that

\[ A = PDP^{-1} \]

what are \( D \) and \( P \)?

PAGE 2
\[ A = PDP^{-1} \]
\[ AP = PDP^{-1}P \implies AP = PD \]

Note: \( P^{-1}P = I \)

let \( P = [ \vec{v}_1 \quad \vec{v}_2 ] \) and \( D = \begin{bmatrix} d_1 & 0 \\ 0 & d_2 \end{bmatrix} \)

\( AP = PD \) is then \( A [ \vec{v}_1 \quad \vec{v}_2 ] = [ \vec{v}_1 \quad \vec{v}_2 ] \begin{bmatrix} d_1 & 0 \\ 0 & d_2 \end{bmatrix} \)

\[ [ A\vec{v}_1 \quad A\vec{v}_2 ] = [ d_1\vec{v}_1 \quad d_2\vec{v}_2 ] \]

this means \( A\vec{v}_1 = d_1\vec{v}_1 \) and \( A\vec{v}_2 = d_2\vec{v}_2 \)

  • \( \vec{v}_1 \) is an eigenvector of \( A \) w/ the corresponding eigenvalue \( d_1 \)
  • \( \vec{v}_2 \) is an eigenvector of \( A \) w/ the corresponding eigenvalue \( d_2 \)

( \( 3 \times 3 \to 3 \) eigenvalue/vector pairs )

\( n \times n \to n \) eigenvalue/vector pairs

PAGE 3

Diagonalization of Matrices

  • P = matrix w/ eigenvectors as columns
  • D = matrix w/ corresponding eigenvalues on the main diagonal

If the eigenvalues are distinct, then the matrix is always diagonalizable; if an eigenvalue is repeated, it may or may not be.

Example

\[ A = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} \]

Triangular, so eigenvalues are on the main diagonal:

\[ \lambda = 1, \quad \lambda = 3 \]

Find eigenvector for \( \lambda = 1 \):

\[ (A - \lambda I) \vec{x} = \vec{0} \implies \begin{bmatrix} 0 & 0 & : & 0 \\ 2 & 2 & : & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 & : & 0 \\ 0 & 0 & : & 0 \end{bmatrix} \]

\( x_2 \) is free, \( x_1 = -x_2 \)

\[ \vec{x} = x_2 \begin{bmatrix} -1 \\ 1 \end{bmatrix} \quad \text{eigenvector: } \begin{bmatrix} -1 \\ 1 \end{bmatrix} \text{ for } \lambda = 1 \]
PAGE 4

Repeat for \( \lambda = 3 \):

\[ (A - \lambda I) \vec{x} = \vec{0} \implies \begin{bmatrix} -2 & 0 & : & 0 \\ 2 & 0 & : & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & : & 0 \\ 0 & 0 & : & 0 \end{bmatrix} \]

\( x_2 \) is free, \( x_1 = 0 \)

\[ \vec{x} = x_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad \text{eigenvector: } \begin{bmatrix} 0 \\ 1 \end{bmatrix} \text{ for } \lambda = 3 \]
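Both eigenpairs can be checked directly against the definition \( A\vec{v} = \lambda\vec{v} \); a small sketch in plain Python (the `matvec` helper is my own):

```python
def matvec(A, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[1, 0], [2, 3]]

# lambda = 1 with eigenvector (-1, 1)
assert matvec(A, [-1, 1]) == [1 * -1, 1 * 1]
# lambda = 3 with eigenvector (0, 1)
assert matvec(A, [0, 1]) == [3 * 0, 3 * 1]
```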

Constructing P and D

\[ P = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \quad D = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix} \quad P^{-1} = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \]

(or \( P = \begin{bmatrix} 0 & -1 \\ 1 & 1 \end{bmatrix} \quad D = \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} \))

Verification: \( A = PDP^{-1} \)

\[ \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \]
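The factorization can be multiplied back out numerically; note that this particular \( P \) happens to be its own inverse. A sketch in plain Python (the `matmul` helper is my own):

```python
def matmul(A, B):
    # Multiply two matrices stored as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P    = [[-1, 0], [1, 1]]
D    = [[1, 0], [0, 3]]
Pinv = [[-1, 0], [1, 1]]                      # P is its own inverse here

assert matmul(P, Pinv) == [[1, 0], [0, 1]]    # P^{-1} P = I
print(matmul(matmul(P, D), Pinv))             # [[1, 0], [2, 3]] = A
```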
PAGE 5

Matrix Diagonalization and Powers

If \( A = PDP^{-1} \),

then \( A^k = (PDP^{-1})^k \)

\[ A^k = (PDP^{-1})(PDP^{-1})(PDP^{-1}) \cdots (PDP^{-1}) \quad \text{k times} \]

Notice that the internal products \( P^{-1}P \) simplify to the identity matrix \( I \):

\[ A^k = PD^kP^{-1} \]

Example Calculation

Given \( A = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \)

To find \( A^4 \):

\[ A^4 = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1^4 & 0 \\ 0 & 3^4 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \]
\[ = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 81 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} \]
\[ = \begin{bmatrix} -1 & 0 \\ 1 & 81 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 80 & 81 \end{bmatrix} \]
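The power can be computed both ways to confirm they agree; a sketch in plain Python (the `matmul` helper is my own):

```python
def matmul(A, B):
    # Multiply two matrices stored as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A    = [[1, 0], [2, 3]]
P    = [[-1, 0], [1, 1]]
Pinv = [[-1, 0], [1, 1]]
D4   = [[1**4, 0], [0, 3**4]]      # D^4: just power the diagonal entries

A4_diag = matmul(matmul(P, D4), Pinv)

A4_direct = A                      # brute force: A * A * A * A
for _ in range(3):
    A4_direct = matmul(A4_direct, A)

assert A4_diag == A4_direct == [[1, 0], [80, 81]]
```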
PAGE 6

Repeated Eigenvalues

What if eigenvalues are repeated?

Ideal case:

dimension of eigenspace = algebraic multiplicity

(# of eigenvectors found the normal way = # of times the eigenvalue appears)

Bad case:

dimension of eigenspace < algebraic multiplicity

\(\Rightarrow\) matrix is NOT diagonalizable
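A standard illustration of the bad case (my example, not from the notes) is the shear matrix \( \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \): \( \lambda = 1 \) has algebraic multiplicity 2, but \( (A - I)\vec{x} = \vec{0} \) forces \( x_2 = 0 \), so the eigenspace is only one-dimensional. A sketch in plain Python:

```python
A = [[1, 1], [0, 1]]   # shear: lambda = 1 repeated twice

def is_eigenvector(A, v, lam):
    # Check the definition A v = lam v directly.
    Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
    return Av == [lam * x for x in v]

assert is_eigenvector(A, [1, 0], 1)       # the only eigendirection
assert not is_eigenvector(A, [0, 1], 1)   # A(0,1) = (1,1), not an eigenvector
```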

Example

\( A = \begin{bmatrix} 5 & 2 & 2 \\ 2 & 5 & 2 \\ 2 & 2 & 5 \end{bmatrix} \quad \lambda = 3, 3, 9 \)

Find eigenvectors for \( \lambda = 3 \): (algebraic multiplicity = 2)

\[ (A - \lambda I)\vec{x} = \vec{0} \]
\[ \begin{bmatrix} 2 & 2 & 2 & : & 0 \\ 2 & 2 & 2 & : & 0 \\ 2 & 2 & 2 & : & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 & 1 & : & 0 \\ 0 & 0 & 0 & : & 0 \\ 0 & 0 & 0 & : & 0 \end{bmatrix} \]

\( x_2, x_3 \) are free variables. \( x_1 = -x_2 - x_3 \)

\[ \vec{x} = x_2 \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \]

Eigenvectors:

\( \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \)

geometric multiplicity = 2

dim eigenspace = 2
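Both \( \lambda = 3 \) eigenvectors can be verified against \( A\vec{v} = 3\vec{v} \); a sketch in plain Python (the `matvec` helper is my own):

```python
def matvec(A, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[5, 2, 2], [2, 5, 2], [2, 2, 5]]

for v in ([-1, 1, 0], [-1, 0, 1]):
    assert matvec(A, v) == [3 * x for x in v]   # A v = 3 v
```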

PAGE 7

Eigenvector Calculation for \(\lambda = 9\)

Repeat the process for the eigenvalue \(\lambda = 9\). We set up the augmented matrix \((A - 9I|0)\):

\[\begin{bmatrix} -4 & 2 & 2 & : & 0 \\ 2 & -4 & 2 & : & 0 \\ 2 & 2 & -4 & : & 0 \end{bmatrix} \sim \dots \sim \begin{bmatrix} 1 & 0 & -1 & : & 0 \\ 0 & 1 & -1 & : & 0 \\ 0 & 0 & 0 & : & 0 \end{bmatrix}\]

From the reduced row echelon form, \( x_3 \) is free with \( x_1 = x_3 \) and \( x_2 = x_3 \), giving the corresponding eigenvector:

\(\text{eigenvector } \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\)

Diagonalization Matrices

Constructing the matrix of eigenvectors \(P\) and the diagonal matrix of eigenvalues \(D\):

\[P = \begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 9 \end{bmatrix}\]

The inverse of matrix \(P\) is calculated as:

\[P^{-1} = \begin{bmatrix} -1/3 & 2/3 & -1/3 \\ -1/3 & -1/3 & 2/3 \\ 1/3 & 1/3 & 1/3 \end{bmatrix}\]

Matrix Decomposition

The original matrix \(A\) can be expressed through the diagonalization formula \(A = PDP^{-1}\):

\[A = \begin{bmatrix} 5 & 2 & 2 \\ 2 & 5 & 2 \\ 2 & 2 & 5 \end{bmatrix} = \underbrace{\begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}}_{P} \underbrace{\begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 9 \end{bmatrix}}_{D} \underbrace{\begin{bmatrix} -1/3 & 2/3 & -1/3 \\ -1/3 & -1/3 & 2/3 \\ 1/3 & 1/3 & 1/3 \end{bmatrix}}_{P^{-1}}\]
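Since \( P^{-1} \) has fractional entries, exact arithmetic with `fractions.Fraction` avoids floating-point noise when multiplying the factorization back out; a sketch in plain Python (the `matmul` helper is my own):

```python
from fractions import Fraction as F

def matmul(A, B):
    # Multiply two matrices stored as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[-1, -1, 1], [1, 0, 1], [0, 1, 1]]
D = [[3, 0, 0], [0, 3, 0], [0, 0, 9]]
Pinv = [[F(-1, 3), F(2, 3), F(-1, 3)],
        [F(-1, 3), F(-1, 3), F(2, 3)],
        [F(1, 3),  F(1, 3),  F(1, 3)]]

A = matmul(matmul(P, D), Pinv)
assert A == [[5, 2, 2], [2, 5, 2], [2, 2, 5]]   # recovers the original matrix
```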