Linear Algebra Cheatsheet

A concise reference for key concepts, formulas, and operations in linear algebra. This cheat sheet covers vectors, matrices, linear transformations, and more, providing a quick guide for students, engineers, and researchers.

Vectors and Spaces

Basic Vector Operations

Vector Addition

\mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, ..., u_n + v_n)

Scalar Multiplication

c\mathbf{u} = (cu_1, cu_2, ..., cu_n)

Dot Product

\mathbf{u} \cdot \mathbf{v} = u_1v_1 + u_2v_2 + ... + u_nv_n

Vector Norm (Magnitude)

\lVert \mathbf{u} \rVert = \sqrt{u_1^2 + u_2^2 + ... + u_n^2}

Cross Product (3D)

\mathbf{u} \times \mathbf{v} = (u_2v_3 - u_3v_2, u_3v_1 - u_1v_3, u_1v_2 - u_2v_1)

Unit Vector

\hat{\mathbf{u}} = \frac{\mathbf{u}}{\lVert \mathbf{u} \rVert}
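
A minimal NumPy sketch of these operations (the vectors and scalar are arbitrary example values):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
c = 2.0

u + v                  # vector addition: [5, 7, 9]
c * u                  # scalar multiplication: [2, 4, 6]
np.dot(u, v)           # dot product: 32
np.linalg.norm(u)      # vector norm: sqrt(14)
np.cross(u, v)         # cross product (3D): [-3, 6, -3]
u / np.linalg.norm(u)  # unit vector in the direction of u
```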

Vector Spaces

Vector Space Axioms:
A set V is a vector space over a field F if it satisfies the following axioms for all \mathbf{u}, \mathbf{v}, \mathbf{w} \in V and c, d \in F:

  1. \mathbf{u} + \mathbf{v} \in V (Closure under addition)
  2. \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} (Commutativity of addition)
  3. (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) (Associativity of addition)
  4. There exists a zero vector \mathbf{0} \in V such that \mathbf{u} + \mathbf{0} = \mathbf{u} (Existence of additive identity)
  5. For each \mathbf{u} \in V, there exists -\mathbf{u} \in V such that \mathbf{u} + (-\mathbf{u}) = \mathbf{0} (Existence of additive inverse)
  6. c\mathbf{u} \in V (Closure under scalar multiplication)
  7. c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v} (Distributivity of scalar multiplication over vector addition)
  8. (c + d)\mathbf{u} = c\mathbf{u} + d\mathbf{u} (Distributivity of scalar multiplication over field addition)
  9. c(d\mathbf{u}) = (cd)\mathbf{u} (Compatibility of scalar multiplication with field multiplication)
  10. 1\mathbf{u} = \mathbf{u} (Identity element of scalar multiplication)
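
A quick numerical spot-check of several axioms in \mathbb{R}^3 (illustrative only; passing assertions do not constitute a proof):

```python
import numpy as np

u, v, w = np.array([1., 2., 3.]), np.array([4., 5., 6.]), np.array([7., 8., 9.])
c, d = 2.0, 3.0

assert np.allclose(u + v, v + u)                # 2: commutativity of addition
assert np.allclose((u + v) + w, u + (v + w))    # 3: associativity of addition
assert np.allclose(c * (u + v), c * u + c * v)  # 7: distributivity over vector addition
assert np.allclose((c + d) * u, c * u + d * u)  # 8: distributivity over field addition
assert np.allclose(c * (d * u), (c * d) * u)    # 9: compatibility of multiplications
assert np.allclose(1 * u, u)                    # 10: identity element
```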

Subspaces

Definition

A subset W of a vector space V is a subspace if it is itself a vector space under the same operations defined on V.

Conditions for a Subspace

To prove W is a subspace of V, show:

  1. W is non-empty (i.e., \mathbf{0} \in W).
  2. W is closed under addition: If \mathbf{u}, \mathbf{v} \in W, then \mathbf{u} + \mathbf{v} \in W.
  3. W is closed under scalar multiplication: If \mathbf{u} \in W and c is a scalar, then c\mathbf{u} \in W.

Examples

  • The set containing only the zero vector, \{\mathbf{0}\}, is a subspace.
  • The entire vector space V is a subspace of itself.
  • A line through the origin in \mathbb{R}^2 is a subspace of \mathbb{R}^2.
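
As a sketch, the third example can be checked numerically. The line W = span{(1, 2)} in \mathbb{R}^2 is a made-up instance; a point lies on W exactly when it is a scalar multiple of the direction vector:

```python
import numpy as np

direction = np.array([1.0, 2.0])  # W = span{(1, 2)}, a line through the origin

def on_line(p, tol=1e-9):
    # p is a multiple of `direction` iff the 2x2 determinant of [p, direction] vanishes
    return abs(p[0] * direction[1] - p[1] * direction[0]) < tol

u, v = 3 * direction, -1.5 * direction  # two points on W
assert on_line(np.zeros(2))  # condition 1: contains the zero vector
assert on_line(u + v)        # condition 2: closed under addition
assert on_line(4.0 * u)      # condition 3: closed under scalar multiplication
```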

Matrices

Basic Matrix Operations

Matrix Addition

(A + B)_{ij} = A_{ij} + B_{ij} (element-wise addition)

Scalar Multiplication

(cA)_{ij} = c(A_{ij}) (multiply each element by the scalar)

Matrix Multiplication

(AB)_{ij} = \sum_{k=1}^{n} A_{ik}B_{kj} (row i of A times column j of B)

Transpose

(A^T)_{ij} = A_{ji} (swap rows and columns)

Trace

\text{tr}(A) = \sum_{i=1}^{n} A_{ii} (sum of diagonal elements)

Determinant

\det(A) (a scalar value that can be computed recursively or by row reduction)

Inverse

A^{-1} (a matrix such that AA^{-1} = A^{-1}A = I, where I is the identity matrix)
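
The same operations in NumPy, with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
c = 2.0

A + B             # matrix addition (element-wise)
c * A             # scalar multiplication
A @ B             # matrix multiplication
A.T               # transpose
np.trace(A)       # trace: 5.0
np.linalg.det(A)  # determinant: -2.0
np.linalg.inv(A)  # inverse (only defined when det(A) != 0)
```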

Special Matrices

  • Identity Matrix (I): A square matrix with 1s on the main diagonal and 0s elsewhere.
  • Zero Matrix: A matrix with all elements equal to 0.
  • Diagonal Matrix: A square matrix in which all off-diagonal elements are 0.
  • Symmetric Matrix: A square matrix A such that A = A^T.
  • Skew-Symmetric Matrix: A square matrix A such that A = -A^T.
  • Orthogonal Matrix: A square matrix Q such that Q^TQ = QQ^T = I.
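
Small constructions of each special matrix in NumPy (the sizes and entries are arbitrary; the 2D rotation matrix is one standard example of an orthogonal matrix):

```python
import numpy as np

I = np.eye(3)                        # identity matrix
Z = np.zeros((3, 3))                 # zero matrix
D = np.diag([1.0, 2.0, 3.0])         # diagonal matrix
S = np.array([[1., 2.], [2., 3.]])   # symmetric: S equals S.T
K = np.array([[0., 2.], [-2., 0.]])  # skew-symmetric: K equals -K.T
theta = np.pi / 4                    # rotation angle for the orthogonal example
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Q.T @ Q, np.eye(2))  # Q^T Q = I
```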

Matrix Properties

Associativity

(AB)C = A(BC)

Distributivity

A(B + C) = AB + AC
(A + B)C = AC + BC

Scalar Multiplication

c(AB) = (cA)B = A(cB)

Transpose Properties

(A + B)^T = A^T + B^T
(cA)^T = cA^T
(AB)^T = B^TA^T

Inverse Properties

(A^{-1})^{-1} = A
(AB)^{-1} = B^{-1}A^{-1}

Determinant Properties

\det(AB) = \det(A)\det(B)
\det(A^T) = \det(A)
\det(A^{-1}) = \frac{1}{\det(A)}
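
These identities can be spot-checked numerically; the random matrices below are arbitrary (and invertible with probability 1), and passing assertions only illustrate the properties:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

assert np.allclose((A @ B) @ C, A @ (B @ C))             # associativity
assert np.allclose(A @ (B + C), A @ B + A @ C)           # distributivity
assert np.allclose((A @ B).T, B.T @ A.T)                 # transpose of a product
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))  # inverse of a product
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))   # determinant of a product
```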

Linear Transformations

Definition and Properties

Definition

A linear transformation T: V \to W is a function between vector spaces V and W that preserves vector addition and scalar multiplication.

Properties

  1. T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}) for all \mathbf{u}, \mathbf{v} \in V.
  2. T(c\mathbf{u}) = cT(\mathbf{u}) for all \mathbf{u} \in V and scalar c.

Zero Vector

T(\mathbf{0}_V) = \mathbf{0}_W, where \mathbf{0}_V and \mathbf{0}_W are the zero vectors in V and W, respectively.

Linear Combination

T(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + ... + c_n\mathbf{v}_n) = c_1T(\mathbf{v}_1) + c_2T(\mathbf{v}_2) + ... + c_nT(\mathbf{v}_n)
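
Any matrix A defines a linear transformation T(\mathbf{v}) = A\mathbf{v}; the sketch below checks the two defining properties for one made-up example map on \mathbb{R}^2:

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])

def T(v):
    return A @ v  # the linear map v -> A v

u, v, c = np.array([1.0, 2.0]), np.array([3.0, -1.0]), 5.0
assert np.allclose(T(u + v), T(u) + T(v))        # preserves vector addition
assert np.allclose(T(c * u), c * T(u))           # preserves scalar multiplication
assert np.allclose(T(np.zeros(2)), np.zeros(2))  # sends 0_V to 0_W
```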

Kernel and Image

Kernel (Null Space)

\text{ker}(T) = \{\mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0}_W\}. The kernel is a subspace of V.

Image (Range)

\text{im}(T) = \{T(\mathbf{v}) : \mathbf{v} \in V\}. The image is a subspace of W.

Rank-Nullity Theorem

\dim(\text{ker}(T)) + \dim(\text{im}(T)) = \dim(V)
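
For T(\mathbf{v}) = A\mathbf{v} with A an m \times n matrix, \dim(\text{im}(T)) is the rank of A and \dim(\text{ker}(T)) = n - \text{rank}(A). A sketch with a made-up matrix whose third column is the sum of the first two (so the kernel is one-dimensional):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
n = A.shape[1]                   # dim(V) = 3
rank = np.linalg.matrix_rank(A)  # dim(im(T)) = 2
nullity = n - rank               # dim(ker(T)) = 1
assert rank + nullity == n       # rank-nullity theorem
```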

Matrix Representation

Given a linear transformation T: V \to W, and bases B = \{\mathbf{v}_1, ..., \mathbf{v}_n\} for V and C = \{\mathbf{w}_1, ..., \mathbf{w}_m\} for W, the matrix representation of T with respect to B and C is the m \times n matrix A such that:

[T(\mathbf{v})]_C = A[\mathbf{v}]_B

where [\mathbf{v}]_B and [T(\mathbf{v})]_C are the coordinate vectors of \mathbf{v} and T(\mathbf{v}) with respect to the bases B and C, respectively.

The columns of A are the coordinate vectors of T(\mathbf{v}_i) with respect to the basis C, i.e., A = [[T(\mathbf{v}_1)]_C \ \ [T(\mathbf{v}_2)]_C \ \ ... \ \ [T(\mathbf{v}_n)]_C]
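
A sketch of this construction for a made-up map T: \mathbb{R}^2 \to \mathbb{R}^3 with the standard bases, where each column of A is T applied to a basis vector:

```python
import numpy as np

def T(v):
    x, y = v
    return np.array([x + y, 2 * x, 3 * y])  # an example linear map R^2 -> R^3

basis = np.eye(2)                           # standard basis e_1, e_2 of R^2
A = np.column_stack([T(e) for e in basis])  # columns: [T(e_1)] and [T(e_2)]

v = np.array([4.0, -1.0])
assert np.allclose(A @ v, T(v))  # [T(v)]_C = A [v]_B in standard coordinates
```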

Eigenvalues and Eigenvectors

Definitions

Eigenvalue

A scalar \lambda is an eigenvalue of a square matrix A if there exists a non-zero vector \mathbf{v} such that A\mathbf{v} = \lambda\mathbf{v}.

Eigenvector

A non-zero vector \mathbf{v} is an eigenvector of a square matrix A corresponding to the eigenvalue \lambda if A\mathbf{v} = \lambda\mathbf{v}.

Eigenspace

The eigenspace of A corresponding to the eigenvalue \lambda is the set of all eigenvectors corresponding to \lambda, together with the zero vector. It is a subspace of \mathbb{R}^n and is denoted by E_\lambda = \{\mathbf{v} : A\mathbf{v} = \lambda\mathbf{v}\}.

Finding Eigenvalues and Eigenvectors

To find the eigenvalues of a matrix A, solve the characteristic equation:

\det(A - \lambda I) = 0

where I is the identity matrix. The roots \lambda of this polynomial equation are the eigenvalues of A.

Once the eigenvalues are found, the corresponding eigenvectors can be found by solving the equation:

(A - \lambda I)\mathbf{v} = \mathbf{0}

for each eigenvalue \lambda.
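
In practice, a numerical routine such as np.linalg.eig computes both at once (it does not literally solve the characteristic polynomial, but its output satisfies the same definition). The matrix below is an arbitrary example with eigenvalues 5 and 2:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)  # eigenvalues 5 and 2 for this A

for lam, v in zip(eigvals, eigvecs.T):  # eigenvectors are the COLUMNS of eigvecs
    assert np.allclose(A @ v, lam * v)  # A v = lambda v
```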

Diagonalization

Diagonalizable Matrix

A square matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that A = PDP^{-1}. The columns of P are the eigenvectors of A, and the diagonal entries of D are the corresponding eigenvalues.

Conditions for Diagonalization

An n \times n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors.

Procedure

  1. Find n linearly independent eigenvectors \mathbf{v}_1, ..., \mathbf{v}_n of A.
  2. Form the matrix P whose columns are these eigenvectors: P = [\mathbf{v}_1 \ \mathbf{v}_2 \ ... \ \mathbf{v}_n].
  3. Form the diagonal matrix D with the corresponding eigenvalues on the diagonal: D = \text{diag}(\lambda_1, \lambda_2, ..., \lambda_n).
  4. Then A = PDP^{-1}.
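
Carrying the procedure out numerically for the same example matrix as above (this assumes A is in fact diagonalizable, i.e. np.linalg.eig returns n linearly independent eigenvectors):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)  # steps 1-2: columns of P are eigenvectors
D = np.diag(eigvals)           # step 3: eigenvalues on the diagonal
assert np.allclose(A, P @ D @ np.linalg.inv(P))  # step 4: A = P D P^{-1}
```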