The following are the same:
*the nullspace of A
*the solution space of AX=0
*the orthogonal complement of the row space of A.

Rank is the number of leading 1's in rref(A). Nullity is the number of columns of rref(A) without a leading 1.

To test whether a vector v is in the span of the vectors w1,...,wk: use w1,...,wk as the columns of a matrix A and check whether AX=v is consistent (see the rref sketch below).

To test whether the vectors w1,...,wk span the vector space V: make a matrix A with columns w1,...,wk and compute rref(A). If every row has a leading 1, yes; otherwise, no.

The inverse of A is found by placing A and the identity matrix next to each other, B=(A,I). The inverse is the right half of rref(B).

The following are equivalent for a square n by n matrix A:
*A is invertible
*det(A) is not 0
*A has rank n
*the rows of A are linearly independent
*the columns of A are linearly independent
*AX=b has precisely one solution for every b
*the nullity of A is 0.

If a problem asks for the basis/dimension of the nullspace of A, write out the solutions of AX=0 and separate the free variables; the basis is the vectors you come up with.

If a problem asks for the basis/dimension of the column space of A, compute rref(A) and pick the columns of A corresponding to the columns of rref(A) that have a leading 1.

If a problem asks for the basis/dimension of the row space of A, compute rref(A transpose) and pick the rows of A (= columns of A transpose) corresponding to the columns of rref(A transpose) with leading 1's.

If a problem asks for the basis/dimension of the space W spanned by a bunch of given vectors, put them into a matrix as columns and find the basis/dimension of the column space as indicated above.

If a problem gives you W as the collection of all vectors of "the following type, where a,b,... are any real numbers", take what is given, separate the variables, and take the resulting vectors as generators of W, as above. If W is given as the collection of "all vectors satisfying the following conditions in a,b,...", then write out the solutions of the linear system these conditions form to get a basis, as in the first point up on top.

If a problem asks for an orthonormal basis of the subspace W spanned by u1,...,uk, you may a) use u1,...,uk in this order as input for Gram-Schmidt, OR b) change the order in which the vectors are given, OR c) first compute a basis for W and then use this basis as input. Option c) helps a lot if the given vectors were linearly dependent.

To find the point on the subspace W spanned by the linearly independent vectors w1,...,wk that is closest to v, make a matrix A with columns w1,...,wk, then solve A^T*A*X=A^T*v. If w1,...,wk are dependent, first find a basis for W and proceed as outlined.

The vector perpendicular to the plane ax+by+cz=0 is (a,b,c). Why? (a,b,c) is in the row space, which is perpendicular to the solution space.

The orthogonal complement of a subspace W sitting in a vector space V consists of all vectors in V that are perpendicular to every vector in W. The dimension of the complement plus the dimension of the given subspace is the dimension of the big vector space: dim(W)+dim(W perp)=dim(V).

Testing whether a transformation is linear: if square terms, cubic terms, sin(x), or other crazy functions show up, it is not linear. If nonzero constants without a variable attached to them are around, it is not linear.

The least squares problem AX=b is solved by solving A^T*A*X=A^T*b. If you look for a best line fit for the points (x1,y1),...,(xn,yn), then A has 2 columns: the first is the numbers x1,...,xn, the second is 1,...,1. The vector b on the right is y1,...,yn (see the line-fit sketch below).
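Here is a minimal sketch of the rref-style checks above, using Python/sympy; the vectors w1, w2, v are a made-up example, not from any particular problem. It tests whether v lies in span(w1,w2) and reads a column-space basis off the pivot columns.

```python
from sympy import Matrix

# made-up example vectors
w1 = Matrix([1, 0, 2])
w2 = Matrix([0, 1, 1])
v  = Matrix([2, 3, 7])

A = w1.row_join(w2)        # matrix with columns w1, w2

# v is in span(w1,...,wk) exactly when AX = v is consistent,
# i.e. rank(A) equals the rank of the augmented matrix (A | v)
aug = A.row_join(v)
print("v in span:", A.rank() == aug.rank())

# basis of the column space: the columns of A sitting at the
# pivot positions (leading 1's) of rref(A)
R, pivots = A.rref()
basis = [A.col(j) for j in pivots]
print("pivot columns:", pivots)
```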
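And a minimal numpy sketch of the normal-equations recipe for the best line fit; the data points are made up. The same A^T*A*X = A^T*v solve also gives the closest point in a spanned subspace.

```python
import numpy as np

# made-up data points (x1,y1),...,(xn,yn)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])

# A has two columns: the x-values and all 1's; the right-hand side is the y-values
A = np.column_stack([x, np.ones_like(x)])
b = y

# solve the normal equations A^T A X = A^T b
slope, intercept = np.linalg.solve(A.T @ A, A.T @ b)
print("best fit line: y =", slope, "* x +", intercept)
```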
Counting inversions of a permutation: for each i, count the number of entries bigger than i that are to the left of i, then total them all up.

If n>3 there are no good ways to remember the determinant of an n by n matrix; it has n! terms.

Scaling a row scales the determinant by that number. Switching two rows multiplies det(A) by -1. Adding k times a row to another row leaves the determinant unchanged.

The determinants of 1x1, 2x2, and 3x3 matrices you should know by heart. A 4x4 can be computed by doing the first part of rref, that is, making the matrix upper triangular (keeping track of row swaps and scalings as above); then the determinant is just the product of the diagonal entries.

The i-j cofactor is (-1)^(i+j) times the determinant of the matrix you get from deleting the i-th row and the j-th column of A. Cofactor expansion is also called Laplace expansion sometimes. The inverse of A can be expressed as the matrix whose i-j entry is the j-i cofactor divided by det(A) (be aware of the "wrong" order of the indices).

An excellent way of computing determinants is to perform some row reduction and then use cofactors (because you just cleaned out all but one entry in some row or column). This works very well for 4x4's. But if the matrix already has some zeros showing, use them right away with cofactor expansion.

Note that det(kA) is det(A) times k^n, where n is the number of rows of A.

The area of a triangle with corners (x1,y1), (x2,y2), (x3,y3) is half the absolute value of the determinant of
(1  1  1)
(x1 x2 x3)
(y1 y2 y3)
(In class I gave you the transpose of this, I think, but of course these have the same determinant!) The area of a shape changes by a factor of |det(A)| under the linear transformation given by v --> A*v.

The adjoint (adjugate) of A has i-j entry equal to (-1)^(i+j) times the determinant of the matrix obtained by deleting the j-th row and i-th column. (Note the transposition that switches rows and columns.)
det(adj(A)) = [det(A)]^(n-1)
A*adj(A) = the diagonal matrix whose diagonal entries are all det(A).

Cramer's rule: the i-th component of the solution of AX=b is det(A_i)/det(A), where A_i is the matrix in which b is substituted for the i-th column (see the sympy sketch below).

lambda is an eigenvalue of A if det(lambda*I - A) is zero (and hence lambda*I - A has a nonzero nullspace). Every nonzero solution of (lambda*I - A)*X = 0 is an eigenvector of A (for lambda).

A is diagonalizable if one can find an invertible P and a diagonal D with A = P*D*P^{-1}. D has the eigenvalues of A on the diagonal, and P has columns equal to eigenvectors of A (lined up in the same order as D). A is diagonalizable precisely when each eigenvalue has an eigenspace of dimension equal to the number of times the eigenvalue is a root of the characteristic polynomial det(lambda*I - A) [that is, geometric and algebraic multiplicity agree].

If A is symmetric, one can diagonalize A; in fact, one can make P orthogonal, which means that P^T is the inverse of P. To get A = P*D*P^{-1} for symmetric A and orthogonal P, first find any P and D, then use Gram-Schmidt on each eigenspace _separately_ to make the columns of P orthogonal, and finally scale them to length 1 (see the numpy sketch below).

To divide complex numbers, recall that (a+bi)/(c+di) = (a+bi)(c-di)/(c^2+d^2).

A is Hermitian if A^T is the conjugate of A; this generalizes symmetric. Unitary means A^T is the inverse of the conjugate of A; this generalizes orthogonal. Normal means A^T and the conjugate of A commute. Hermitian matrices can be diagonalized with the same methods as symmetric ones.
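A minimal Python/sympy sketch of the adjugate facts and Cramer's rule from above; the 3x3 matrix and right-hand side are a made-up example.

```python
from sympy import Matrix, eye

# made-up invertible 3x3 example
A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 4]])
b = Matrix([1, 2, 3])
n = A.shape[0]

adjA = A.adjugate()
assert A * adjA == A.det() * eye(n)          # A*adj(A) = det(A) * I
assert adjA.det() == A.det() ** (n - 1)      # det(adj(A)) = det(A)^(n-1)

# Cramer's rule: i-th component of the solution of AX = b is det(A_i)/det(A),
# where A_i is A with b substituted for the i-th column
x = []
for i in range(n):
    Ai = A.copy()
    for r in range(n):
        Ai[r, i] = b[r]
    x.append(Ai.det() / A.det())
assert Matrix(x) == A.solve(b)               # agrees with solving AX = b directly
print("solution by Cramer's rule:", x)
```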
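For the symmetric case, a minimal numpy sketch (made-up matrix): numpy.linalg.eigh returns the eigenvalues and an orthogonal matrix of eigenvectors directly, which is the numerical shortcut for "find P and D, then orthonormalize each eigenspace".

```python
import numpy as np

# made-up symmetric matrix
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, P = np.linalg.eigh(A)   # eigh is meant for symmetric/Hermitian matrices
D = np.diag(eigenvalues)

# P is orthogonal (P^T is its inverse) and A = P D P^T
print(np.allclose(P.T @ P, np.eye(3)))
print(np.allclose(A, P @ D @ P.T))
```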
To solve the system X' = A*X, find P and D with A = P*D*P^{-1}; then the general solution is
X(t) = b1*P1*exp(lambda_1*t) + ... + bn*Pn*exp(lambda_n*t),
where lambda_i is the i-th eigenvalue of A and P1,...,Pn are the corresponding eigenvectors (the columns of P). The constants b1,...,bn are determined by the initial condition, since X(0) = b1*P1 + ... + bn*Pn.
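A minimal numpy/scipy sketch of this recipe; the matrix A and the initial condition X(0) are made up. The constants b1,...,bn come from solving P*b = X(0), and the result is checked against the matrix exponential solution X(t) = exp(At)*X(0).

```python
import numpy as np
from scipy.linalg import expm

# made-up system X' = A X with initial condition X(0)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
X0 = np.array([1.0, 0.0])

lam, P = np.linalg.eig(A)      # eigenvalues and eigenvectors (columns of P)
b = np.linalg.solve(P, X0)     # constants from X(0) = b1*P1 + ... + bn*Pn

t = 1.5
X_t = sum(b[i] * P[:, i] * np.exp(lam[i] * t) for i in range(len(lam)))

# compare with the matrix exponential solution X(t) = exp(At) X(0)
print(np.allclose(np.real(X_t), expm(A * t) @ X0))
```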