Linear Algebra Cheatsheet
A concise reference for key concepts, formulas, and operations in linear algebra. This cheat sheet covers vectors, matrices, linear transformations, and more, providing a quick guide for students, engineers, and researchers.
Vectors and Spaces
Basic Vector Operations
| Operation | Formula |
| --- | --- |
| Vector Addition | \mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n) |
| Scalar Multiplication | c\mathbf{u} = (cu_1, cu_2, \ldots, cu_n) |
| Dot Product | \mathbf{u} \cdot \mathbf{v} = u_1v_1 + u_2v_2 + \ldots + u_nv_n |
| Vector Norm (Magnitude) | \lVert \mathbf{u} \rVert = \sqrt{u_1^2 + u_2^2 + \ldots + u_n^2} |
| Cross Product (3D) | \mathbf{u} \times \mathbf{v} = (u_2v_3 - u_3v_2, u_3v_1 - u_1v_3, u_1v_2 - u_2v_1) |
| Unit Vector | \hat{\mathbf{u}} = \frac{\mathbf{u}}{\lVert \mathbf{u} \rVert} |
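In code, these operations map directly onto NumPy (a minimal sketch; the vectors u and v are illustrative):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

u + v                           # vector addition
2.5 * u                         # scalar multiplication
np.dot(u, v)                    # dot product (also u @ v)
np.linalg.norm(u)               # vector norm (magnitude)
np.cross(u, v)                  # cross product (3D only)
u_hat = u / np.linalg.norm(u)   # unit vector in the direction of u
```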
Vector Spaces
Vector Space Axioms:

A vector space V over a field F (e.g., \mathbb{R}) is a set equipped with vector addition and scalar multiplication satisfying, for all \mathbf{u}, \mathbf{v}, \mathbf{w} \in V and all scalars c, d:

1. Closure under addition: \mathbf{u} + \mathbf{v} \in V
2. Commutativity of addition: \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}
3. Associativity of addition: (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})
4. Additive identity: there exists \mathbf{0} \in V such that \mathbf{u} + \mathbf{0} = \mathbf{u}
5. Additive inverses: for every \mathbf{u} there exists -\mathbf{u} with \mathbf{u} + (-\mathbf{u}) = \mathbf{0}
6. Closure under scalar multiplication: c\mathbf{u} \in V
7. Distributivity over vector addition: c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}
8. Distributivity over scalar addition: (c + d)\mathbf{u} = c\mathbf{u} + d\mathbf{u}
9. Compatibility of scalar multiplication: c(d\mathbf{u}) = (cd)\mathbf{u}
10. Multiplicative identity: 1\mathbf{u} = \mathbf{u}
Subspaces
| Topic | Description |
| --- | --- |
| Definition | A subset W of a vector space V is a subspace if it is itself a vector space under the same operations defined on V. |
| Conditions for a Subspace | To prove W is a subspace of V, show: (1) \mathbf{0} \in W (so W is non-empty); (2) W is closed under addition: if \mathbf{u}, \mathbf{v} \in W then \mathbf{u} + \mathbf{v} \in W; (3) W is closed under scalar multiplication: if \mathbf{u} \in W then c\mathbf{u} \in W for every scalar c. |
| Examples | The zero subspace \{\mathbf{0}\}; any line or plane through the origin in \mathbb{R}^3; the null space and column space of a matrix; the polynomials of degree at most n inside the space of all polynomials. |
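As a quick numerical sanity check (a NumPy sketch; the matrix A and the tolerance are illustrative), the null space \{x : Ax = 0\} of a matrix is a subspace, so the zero vector, sums, and scalar multiples of null-space vectors must again satisfy Ax = 0:

```python
import numpy as np

# W = null space of A = {x : Ax = 0}, a subspace of R^3.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0]])

# A basis for the null space via SVD: the right-singular vectors beyond
# the numerical rank span {x : Ax = 0}.
_, s, Vt = np.linalg.svd(A)
rank = len(s[s > 1e-10])
null_basis = Vt[rank:]

x, y = null_basis[0], null_basis[-1]  # two vectors in W

# Spot-check the subspace conditions: 0, x + y, and c*x all satisfy Av = 0.
for v in (np.zeros(3), x + y, 3.7 * x):
    assert np.allclose(A @ v, 0.0)
```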
Matrices
Basic Matrix Operations
| Operation | Formula |
| --- | --- |
| Matrix Addition | (A + B)_{ij} = A_{ij} + B_{ij} (element-wise addition) |
| Scalar Multiplication | (cA)_{ij} = c\,A_{ij} (multiply each element by the scalar) |
| Matrix Multiplication | (AB)_{ij} = \sum_{k=1}^{n} A_{ik}B_{kj} (row i of A times column j of B) |
| Transpose | (A^T)_{ij} = A_{ji} (swap rows and columns) |
| Trace | \text{tr}(A) = \sum_{i=1}^{n} A_{ii} (sum of diagonal elements) |
| Determinant | \det(A) (a scalar that can be computed recursively by cofactor expansion or by row reduction) |
| Inverse | A^{-1} (the matrix such that AA^{-1} = A^{-1}A = I, where I is the identity matrix; it exists if and only if \det(A) \neq 0) |
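The same operations in NumPy (a minimal sketch; the matrices A and B are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

A + B                # matrix addition
2.0 * A              # scalar multiplication
A @ B                # matrix multiplication
A.T                  # transpose
np.trace(A)          # trace
np.linalg.det(A)     # determinant
np.linalg.inv(A)     # inverse (raises LinAlgError if A is singular)
```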
Special Matrices
| Matrix | Definition |
| --- | --- |
| Identity I | I_{ij} = 1 if i = j and 0 otherwise; AI = IA = A |
| Zero Matrix | All entries are 0 |
| Diagonal | A_{ij} = 0 for i \neq j |
| Symmetric | A^T = A |
| Skew-Symmetric | A^T = -A |
| Upper/Lower Triangular | All entries below/above the main diagonal are 0 |
| Orthogonal | A^T A = AA^T = I, i.e., A^{-1} = A^T |
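A short NumPy sketch constructing a few of these and checking their defining properties (the matrices are illustrative):

```python
import numpy as np

I = np.eye(3)                  # identity
Z = np.zeros((3, 3))           # zero matrix
D = np.diag([1.0, 2.0, 3.0])   # diagonal

# A rotation matrix is orthogonal: Q^T Q = I, so Q^{-1} = Q^T.
theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Q.T @ Q, np.eye(2))

S = np.array([[1.0, 5.0],
              [5.0, 2.0]])
assert np.allclose(S, S.T)     # symmetric: A^T = A
```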
Matrix Properties
| Property | Identity |
| --- | --- |
| Associativity | (AB)C = A(BC) |
| Distributivity | A(B + C) = AB + AC |
| Scalar Multiplication | c(AB) = (cA)B = A(cB) |
| Transpose Properties | (A + B)^T = A^T + B^T, \quad (AB)^T = B^T A^T |
| Inverse Properties | (A^{-1})^{-1} = A, \quad (AB)^{-1} = B^{-1}A^{-1} |
| Determinant Properties | \det(AB) = \det(A)\det(B), \quad \det(A^T) = \det(A) |
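These identities can be spot-checked numerically (a NumPy sketch; random matrices are almost surely invertible, and the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

assert np.allclose((A @ B) @ C, A @ (B @ C))             # associativity
assert np.allclose(A @ (B + C), A @ B + A @ C)           # distributivity
assert np.allclose((A @ B).T, B.T @ A.T)                 # transpose of a product
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))  # inverse of a product
assert np.allclose(np.linalg.det(A @ B),
                   np.linalg.det(A) * np.linalg.det(B))  # determinant of a product
```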
Linear Transformations
Definition and Properties
| Topic | Statement |
| --- | --- |
| Definition | A linear transformation T: V \to W is a function between vector spaces V and W that preserves vector addition and scalar multiplication. |
| Properties | T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}) and T(c\mathbf{u}) = cT(\mathbf{u}) for all \mathbf{u}, \mathbf{v} \in V and all scalars c. |
| Zero Vector | T(\mathbf{0}_V) = \mathbf{0}_W, where \mathbf{0}_V and \mathbf{0}_W are the zero vectors in V and W, respectively. |
| Linear Combination | T(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n) = c_1T(\mathbf{v}_1) + c_2T(\mathbf{v}_2) + \ldots + c_nT(\mathbf{v}_n) |
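Every m \times n matrix A defines a linear map T(\mathbf{x}) = A\mathbf{x}. The sketch below (NumPy; the rotation example is illustrative) verifies the two defining properties:

```python
import numpy as np

# T rotates R^2 by 90 degrees counterclockwise; T(x) = A x is linear.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
T = lambda x: A @ x

u, v, c = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), 4.2

assert np.allclose(T(u + v), T(u) + T(v))   # additivity
assert np.allclose(T(c * u), c * T(u))      # homogeneity
assert np.allclose(T(np.zeros(2)), 0.0)     # zero maps to zero
```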
Kernel and Image
| Concept | Definition |
| --- | --- |
| Kernel (Null Space) | \text{ker}(T) = \{\mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0}_W\}. The kernel is a subspace of V. |
| Image (Range) | \text{im}(T) = \{T(\mathbf{v}) : \mathbf{v} \in V\}. The image is a subspace of W. |
| Rank-Nullity Theorem | \dim(\text{ker}(T)) + \dim(\text{im}(T)) = \dim(V) |
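A numerical illustration of rank-nullity (a sketch assuming SciPy for its null_space helper; the matrix is illustrative):

```python
import numpy as np
from scipy.linalg import null_space

# T(x) = A x maps R^4 -> R^3; rank + nullity must equal dim(V) = 4.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # row 3 = row 1 + row 2, so rank is 2

rank = np.linalg.matrix_rank(A)        # dim(im(T))
nullity = null_space(A).shape[1]       # dim(ker(T)), basis vectors as columns
assert rank + nullity == A.shape[1]    # rank-nullity: 2 + 2 == 4
```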
Matrix Representation
Given a linear transformation T: V \to W and bases B = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\} for V and C = \{\mathbf{w}_1, \ldots, \mathbf{w}_m\} for W, the matrix representation of T with respect to B and C is the m \times n matrix A such that

[T(\mathbf{v})]_C = A[\mathbf{v}]_B

where [\mathbf{v}]_B and [T(\mathbf{v})]_C are the coordinate vectors of \mathbf{v} and T(\mathbf{v}) with respect to the bases B and C, respectively.

The columns of A are the coordinate vectors of T(\mathbf{v}_i) with respect to the basis C, i.e., A = [[T(\mathbf{v}_1)]_C \ \ [T(\mathbf{v}_2)]_C \ \ \ldots \ \ [T(\mathbf{v}_n)]_C]
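A sketch of this construction in NumPy (the map M and the bases B, C are illustrative; coordinates with respect to a basis are recovered by solving a linear system):

```python
import numpy as np

# T: R^2 -> R^2, given in standard coordinates by the matrix M.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
T = lambda x: M @ x

# Basis vectors stored as columns: B for the domain, C for the codomain.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# Column i of A is [T(v_i)]_C, found by solving C y = T(v_i).
A = np.linalg.solve(C, T(B))   # T(B) applies T to each basis vector (column)

# Check [T(v)]_C = A [v]_B for a vector v given in B-coordinates.
v_B = np.array([0.7, -1.3])
v = B @ v_B                    # the actual vector in standard coordinates
assert np.allclose(np.linalg.solve(C, T(v)), A @ v_B)
```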
Eigenvalues and Eigenvectors
Definitions
| Term | Definition |
| --- | --- |
| Eigenvalue | A scalar \lambda is an eigenvalue of a square matrix A if there exists a non-zero vector \mathbf{v} such that A\mathbf{v} = \lambda\mathbf{v}. |
| Eigenvector | A non-zero vector \mathbf{v} is an eigenvector of a square matrix A corresponding to the eigenvalue \lambda if A\mathbf{v} = \lambda\mathbf{v}. |
| Eigenspace | The eigenspace of A corresponding to the eigenvalue \lambda is the set of all eigenvectors corresponding to \lambda, together with the zero vector. It is a subspace of \mathbb{R}^n and is denoted by E_\lambda = \{\mathbf{v} : A\mathbf{v} = \lambda\mathbf{v}\}. |
Finding Eigenvalues and Eigenvectors
To find the eigenvalues of a matrix A, solve the characteristic equation \det(A - \lambda I) = 0, where I is the identity matrix.

Once the eigenvalues are found, the corresponding eigenvectors are obtained by solving (A - \lambda I)\mathbf{v} = \mathbf{0} for each eigenvalue \lambda.
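In practice both steps are handled at once by an eigensolver (a NumPy sketch; the matrix is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)   # here: 5 and 2

# Verify A v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```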
Diagonalization
| Topic | Statement |
| --- | --- |
| Diagonalizable Matrix | A square matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that A = PDP^{-1}. The columns of P are the eigenvectors of A, and the diagonal entries of D are the corresponding eigenvalues. |
| Conditions for Diagonalization | An n \times n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. |
| Procedure | (1) Find the eigenvalues of A by solving \det(A - \lambda I) = 0. (2) For each eigenvalue \lambda, find a basis for its eigenspace by solving (A - \lambda I)\mathbf{v} = \mathbf{0}. (3) If the eigenvectors collected across all eigenvalues give n linearly independent vectors, form P with them as columns and D with the corresponding eigenvalues on the diagonal, in the same order; then A = PDP^{-1}. Otherwise, A is not diagonalizable. |
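Putting the pieces together (a NumPy sketch continuing the matrix from the previous example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigenvalues)

# A is diagonalizable here: P is invertible (2 independent eigenvectors).
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```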