Linear Algebra Formulas
Vectors and Vector Spaces
Vector Definition: \[ \mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} \]
Vector Addition: \[ \mathbf{u} + \mathbf{v} = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} + \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{pmatrix} \]
Scalar Multiplication: \[ c \mathbf{v} = c \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} c v_1 \\ c v_2 \\ \vdots \\ c v_n \end{pmatrix} \]
Dot Product: \[ \mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n \] \[ \mathbf{u} \cdot \mathbf{v} = |\mathbf{u}| |\mathbf{v}| \cos \theta \]
Cross Product (in \( \mathbb{R}^3 \)): \[ \mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} \] \[ |\mathbf{u} \times \mathbf{v}| = |\mathbf{u}| |\mathbf{v}| \sin \theta \]
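As a quick numerical check of the dot- and cross-product formulas, here is a minimal NumPy sketch; the vectors are arbitrary illustrative values.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

dot = np.dot(u, v)                 # u1*v1 + u2*v2 + u3*v3 = 32
cross = np.cross(u, v)             # vector perpendicular to both u and v
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))  # from u.v = |u||v|cos(theta)
print(dot, cross, cos_theta)
```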
Vector Space Axioms:
- Closure under addition
- Closure under scalar multiplication
- Associativity of addition
- Commutativity of addition
- Existence of additive identity
- Existence of additive inverse
- Distributivity of scalar multiplication over vector addition
- Distributivity of scalar multiplication over field addition
- Compatibility of scalar multiplication with field multiplication
- Existence of multiplicative identity
Subspace: A subset \( W \) of a vector space \( V \) that is itself a vector space under the same operations.
Matrices
Matrix Definition: \[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \]
Matrix Addition: \[ A + B = \begin{pmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\ a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn} \end{pmatrix} \]
Scalar Multiplication: \[ cA = \begin{pmatrix} c a_{11} & c a_{12} & \cdots & c a_{1n} \\ c a_{21} & c a_{22} & \cdots & c a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c a_{m1} & c a_{m2} & \cdots & c a_{mn} \end{pmatrix} \]
Matrix Multiplication (defined when \( A \) is \( m \times n \) and \( B \) is \( n \times p \)): \[ AB = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{np} \end{pmatrix} \] with entries \[ (AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} \]
Transpose: \[ A^T = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix} \]
Determinant (for a \( 2 \times 2 \) matrix): \[ \det(A) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \]
Inverse (for a \( 2 \times 2 \) matrix, provided \( \det(A) \neq 0 \)): \[ A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \]
Properties of Determinants:
- \( \det(A^T) = \det(A) \)
- \( \det(AB) = \det(A) \det(B) \)
- \( \det(A^{-1}) = \frac{1}{\det(A)} \)
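A minimal NumPy sketch of the \( 2 \times 2 \) determinant and inverse formulas and the determinant properties above; the matrix entries are arbitrary illustrative values.

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det_A = np.linalg.det(A)     # ad - bc = 4*6 - 7*2 = 10
A_inv = np.linalg.inv(A)     # (1/det(A)) * [[d, -b], [-c, a]]

# Verify the inverse and the determinant properties numerically.
print(np.allclose(A @ A_inv, np.eye(2)))          # True
print(np.isclose(np.linalg.det(A.T), det_A))      # det(A^T) = det(A)
print(np.isclose(np.linalg.det(A_inv), 1/det_A))  # det(A^{-1}) = 1/det(A)
```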
Systems of Linear Equations
General Form: \[ A \mathbf{x} = \mathbf{b} \]
Augmented Matrix: \[ \left( A | \mathbf{b} \right) \]
Gaussian Elimination:
- Form the augmented matrix.
- Use row operations to reach row-echelon form.
- Perform back-substitution to find the solution.
Row Operations:
- Swap two rows.
- Multiply a row by a non-zero scalar.
- Add or subtract a multiple of one row to/from another row.
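The sketch below illustrates the elimination procedure on a small system; it is a minimal version (no pivoting, square non-singular matrix assumed), not a production solver, and the test matrix is an arbitrary example.

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Solve A x = b by forward elimination and back-substitution.
    Minimal sketch: assumes A is square and no zero pivot occurs
    (a robust version would add partial pivoting, i.e. row swaps)."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])  # augmented matrix (A | b)
    n = len(b)
    # Forward elimination to row-echelon form.
    for i in range(n):
        for r in range(i + 1, n):
            factor = M[r, i] / M[i, i]
            M[r, i:] -= factor * M[i, i:]     # add a multiple of row i to row r
    # Back-substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i+1:n] @ x[i+1:n]) / M[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_eliminate(A, b), np.linalg.solve(A, b))   # both give [0.8, 1.4]
```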
Eigenvalues and Eigenvectors
Eigenvalue Equation: \[ A \mathbf{v} = \lambda \mathbf{v} \]
Characteristic Polynomial: \[ \det(A - \lambda I) = 0 \]
Solving for Eigenvalues: Find the roots of the characteristic polynomial.
Solving for Eigenvectors: Solve \( (A - \lambda I) \mathbf{v} = 0 \) for each eigenvalue \( \lambda \).
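A short NumPy check of the eigenvalue equation; the symmetric test matrix is an arbitrary illustrative choice.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and unit-norm eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                              # roots of det(A - lambda I) = 0: 3 and 1

# Check A v = lambda v for the first eigenpair.
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))   # True
```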
Diagonalization
Diagonalization of a Matrix: A matrix \( A \) is diagonalizable if there exists an invertible matrix \( P \) such that \[ P^{-1} A P = D \] where \( D \) is a diagonal matrix.
Steps to Diagonalize a Matrix:
- Find the eigenvalues of \( A \).
- Find the eigenvectors of \( A \) (diagonalization requires \( n \) linearly independent eigenvectors).
- Form the matrix \( P \) using the eigenvectors as columns.
- Form the diagonal matrix \( D \) with the eigenvalues on the diagonal, in the same order as the corresponding columns of \( P \).
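A minimal NumPy sketch of these steps; the matrix is an arbitrary example with two distinct eigenvalues, so it is guaranteed to be diagonalizable.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)   # eigenvectors become the columns of P
D = np.diag(eigenvalues)            # eigenvalues on the diagonal, in matching order

# Verify P^{-1} A P = D.
print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True
```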
Inner Product and Orthogonality
Inner Product (Dot Product): \[ \langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n \]
Norm (Length) of a Vector: \[ |\mathbf{v}| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle} \]
Orthogonality: Two vectors \( \mathbf{u} \) and \( \mathbf{v} \) are orthogonal if \[ \langle \mathbf{u}, \mathbf{v} \rangle = 0 \]
Orthogonal Projection: \[ \text{proj}_{\mathbf{u}} \mathbf{v} = \frac{\langle \mathbf{v}, \mathbf{u} \rangle}{\langle \mathbf{u}, \mathbf{u} \rangle} \mathbf{u} \]
Gram-Schmidt Process: To orthogonalize a set of vectors \( \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\} \):
- Set \( \mathbf{u}_1 = \mathbf{v}_1 \).
- For \( k = 2 \) to \( n \): \[ \mathbf{u}_k = \mathbf{v}_k - \sum_{j=1}^{k-1} \text{proj}_{\mathbf{u}_j} \mathbf{v}_k \]
- Normalize \( \mathbf{u}_k \) to obtain orthonormal vectors \( \mathbf{e}_k = \frac{\mathbf{u}_k}{|\mathbf{u}_k|} \).
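A minimal Python sketch of the process, assuming the input vectors are linearly independent; it normalizes as it goes, so each projection reduces to \( (\mathbf{v} \cdot \mathbf{e})\,\mathbf{e} \). The test vectors are arbitrary.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical
    Gram-Schmidt; no re-orthogonalization pass for numerical robustness)."""
    basis = []
    for v in vectors:
        u = v.astype(float).copy()
        for e in basis:
            u -= np.dot(v, e) * e          # subtract proj_e(v) for each previous e
        basis.append(u / np.linalg.norm(u))
    return np.array(basis)

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
E = gram_schmidt(vs)
print(np.allclose(E @ E.T, np.eye(3)))     # rows are orthonormal
```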
Linear Transformations
Definition: A linear transformation \( T \) from \( \mathbb{R}^n \) to \( \mathbb{R}^m \) is a function that satisfies:
- \( T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}) \)
- \( T(c \mathbf{u}) = c T(\mathbf{u}) \)
Matrix Representation: If \( T(\mathbf{x}) = A \mathbf{x} \), then \( A \) is the matrix representation of \( T \).
Kernel and Image:
- Kernel: \( \ker(T) = \{\mathbf{x} \in \mathbb{R}^n : T(\mathbf{x}) = \mathbf{0}\} \)
- Image: \( \text{Im}(T) = \{T(\mathbf{x}) : \mathbf{x} \in \mathbb{R}^n\} \)
Rank-Nullity Theorem: \[ \text{dim}(\ker(T)) + \text{dim}(\text{Im}(T)) = n \]
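A small NumPy check of the rank-nullity theorem for \( T(\mathbf{x}) = A\mathbf{x} \); the rank-deficient matrix is an arbitrary illustrative choice.

```python
import numpy as np

# T(x) = A x maps R^3 to R^2; rank-nullity says rank + nullity = 3.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # second row is a multiple of the first

rank = np.linalg.matrix_rank(A)      # dim(Im(T))
nullity = A.shape[1] - rank          # dim(ker(T)) by the theorem
print(rank, nullity)                 # 1, 2
```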
Determinants and Inverses
Determinant (for an \( n \times n \) matrix, by cofactor expansion along any fixed row \( i \)): \[ \det(A) = \sum_{j=1}^n (-1)^{i+j} a_{ij} \det(A_{ij}) \] where \( A_{ij} \) is the \( (n-1) \times (n-1) \) matrix obtained by deleting the \( i \)-th row and \( j \)-th column from \( A \).
Properties of Inverses:
- \( (A^{-1})^{-1} = A \)
- \( (AB)^{-1} = B^{-1}A^{-1} \)
- \( (A^T)^{-1} = (A^{-1})^T \)
Cramer's Rule: For a system \( A \mathbf{x} = \mathbf{b} \) with \( n \) equations, \( n \) unknowns, and \( \det(A) \neq 0 \): \[ x_i = \frac{\det(A_i)}{\det(A)} \] where \( A_i \) is the matrix obtained by replacing the \( i \)-th column of \( A \) with \( \mathbf{b} \).
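A minimal Python sketch of Cramer's rule; it is for illustration only (determinant-based solving is far slower than elimination for large systems), and the test system is the same arbitrary one used above.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule. Assumes det(A) != 0;
    use np.linalg.solve in practice."""
    det_A = np.linalg.det(A)
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        A_i = A.astype(float).copy()
        A_i[:, i] = b                       # replace the i-th column with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b), np.linalg.solve(A, b))   # both: [0.8, 1.4]
```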
Special Matrices
Identity Matrix: \[ I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \]
Diagonal Matrix: \[ D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix} \]
Symmetric Matrix: \[ A = A^T \]
Orthogonal Matrix: \[ A^T A = A A^T = I \]
Positive Definite Matrix: A symmetric matrix \( A \) is positive definite if \( \mathbf{x}^T A \mathbf{x} > 0 \) for all non-zero vectors \( \mathbf{x} \).
Quadratic Forms
Quadratic Form: \[ Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x} \] where \( A \) is a symmetric matrix.
Classification of Quadratic Forms:
- Positive definite: All eigenvalues of \( A \) are positive.
- Negative definite: All eigenvalues of \( A \) are negative.
- Indefinite: \( A \) has both positive and negative eigenvalues.
- Positive semidefinite: All eigenvalues of \( A \) are non-negative.
- Negative semidefinite: All eigenvalues of \( A \) are non-positive.
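A small sketch of this eigenvalue-sign classification; the helper name `classify_quadratic_form` and the tolerance `tol` are illustrative choices, and the test matrices are arbitrary.

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-10):
    """Classify Q(x) = x^T A x from the eigenvalue signs of the symmetric matrix A.
    `tol` decides when an eigenvalue counts as zero."""
    w = np.linalg.eigvalsh(A)            # real eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify_quadratic_form(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(classify_quadratic_form(np.array([[1.0, 0.0], [0.0, -2.0]])))  # indefinite
```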
Singular Value Decomposition (SVD)
SVD Definition: For any \( m \times n \) matrix \( A \), there exist orthogonal matrices \( U \) (\( m \times m \)) and \( V \) (\( n \times n \)) such that \[ A = U \Sigma V^T \] where \( \Sigma \) is an \( m \times n \) matrix whose only non-zero entries are the non-negative singular values on its main diagonal.
Properties of SVD:
- \( U \) and \( V \) are orthogonal matrices.
- \( \Sigma \) contains the singular values of \( A \).
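A minimal NumPy sketch of the factorization and its properties; the \( 2 \times 3 \) test matrix is an arbitrary example.

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])         # a 2 x 3 matrix

U, s, Vt = np.linalg.svd(A)              # s holds the non-negative singular values
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)     # embed them in an m x n "diagonal" matrix

print(np.allclose(A, U @ Sigma @ Vt))    # A = U Sigma V^T
print(np.allclose(U @ U.T, np.eye(2)))   # U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(3))) # V is orthogonal
```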
Orthogonal Projections and Least Squares
Orthogonal Projection onto a Subspace \( W \): \[ \mathbf{p} = A (A^T A)^{-1} A^T \mathbf{b} \] where \( A \) is a matrix whose columns form a basis for \( W \) (so that \( A^T A \) is invertible).
Least Squares Solution: To solve \( A \mathbf{x} \approx \mathbf{b} \) when the columns of \( A \) are linearly independent: \[ \mathbf{x} = (A^T A)^{-1} A^T \mathbf{b} \]
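A minimal NumPy sketch of least squares for a line fit; the data points are made-up illustrative values, and the normal-equations route is compared with NumPy's own solver.

```python
import numpy as np

# Fit a line y = c0 + c1*t to a few points: the columns of A are [1, t].
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.1, 2.9, 4.2])
A = np.column_stack([np.ones_like(t), t])

# Normal-equations solution (valid because the columns of A are independent)...
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
# ...and NumPy's least-squares routine, the numerically preferred route.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_normal, x_lstsq)                 # the two solutions agree
```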
Jordan Canonical Form
Jordan Form: A matrix \( A \) is similar to a Jordan matrix \( J \) if there exists an invertible matrix \( P \) such that \[ A = P J P^{-1} \]
Jordan Block: \[ J_k(\lambda) = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda \end{pmatrix} \]
Steps to Find Jordan Form:
- Find the eigenvalues of \( A \).
- Find the eigenvectors and generalized eigenvectors of \( A \).
- Form the Jordan blocks.
- Form the matrix \( P \) using the eigenvectors and generalized eigenvectors.
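A minimal SymPy sketch of computing a Jordan form symbolically; the non-diagonalizable test matrix is an arbitrary example.

```python
import sympy as sp

# A non-diagonalizable matrix: a single eigenvalue 2 with a 1-dimensional eigenspace.
A = sp.Matrix([[2, 1],
               [0, 2]])

P, J = A.jordan_form()                   # returns P and J with A = P * J * P**-1
print(J)                                 # Matrix([[2, 1], [0, 2]]) -- one Jordan block
print(sp.simplify(P * J * P.inv() - A))  # zero matrix
```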
Complex Eigenvalues and Eigenvectors
Complex Eigenvalue Equation: \[ A \mathbf{v} = \lambda \mathbf{v} \]
Complex Characteristic Polynomial: \[ \det(A - \lambda I) = 0 \]
Solving for Complex Eigenvalues: Find the complex roots of the characteristic polynomial.
Solving for Complex Eigenvectors: Solve \( (A - \lambda I) \mathbf{v} = 0 \) for each complex eigenvalue \( \lambda \).
Forming a Real Representation: For a real matrix, the real and imaginary parts of a complex eigenvector can be combined to build real solutions (or a real block form of the matrix).
Properties of Complex Eigenvalues:
- For a real matrix, complex eigenvalues occur in conjugate pairs.
- The eigenvectors corresponding to complex eigenvalues are also complex.
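A short NumPy illustration: a rotation matrix has no real eigenvalues, and its complex eigenvalues form a conjugate pair. The angle is an arbitrary illustrative value.

```python
import numpy as np

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                                          # approximately cos(theta) +/- i sin(theta)
print(np.allclose(eigenvalues[0], np.conj(eigenvalues[1]))) # conjugate pair: True
```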
Matrix Exponentials
Definition: The matrix exponential of ( A ) is defined as \[ e^A = \sum_{k=0}^\infty \frac{A^k}{k!} \]
Properties:
- \( e^{A+B} = e^A e^B \) if \( AB = BA \).
- \( \mathbf{x}(t) = e^{tA} \mathbf{x}_0 \) solves the differential equation \( \mathbf{x}' = A \mathbf{x} \) with initial condition \( \mathbf{x}(0) = \mathbf{x}_0 \).
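A minimal sketch comparing a truncated power series with SciPy's matrix exponential; the matrix and the truncation length 20 are arbitrary illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Truncated power series  sum_{k=0}^{19} A^k / k!  versus scipy.linalg.expm.
series = sum(np.linalg.matrix_power(A, k) / factorial(k) for k in range(20))
print(np.allclose(series, expm(A)))      # True: the series converges quickly here
```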
Systems of Linear Differential Equations
General Form: \[ \mathbf{x}' = A \mathbf{x} \]
Solution Using Eigenvalues and Eigenvectors:
- Find the eigenvalues and eigenvectors of \( A \).
- Form the general solution using the eigenvalues and eigenvectors.
- Solve for the constants using the initial conditions.
Solution Using Matrix Exponentials: \[ \mathbf{x}(t) = e^{At} \mathbf{x}_0 \]
Diagonalizable Case: If \( A \) is diagonalizable, then \[ e^{At} = P e^{Dt} P^{-1} \] where \( D \) is a diagonal matrix containing the eigenvalues of \( A \) and \( P \) has the corresponding eigenvectors as columns.
Non-Diagonalizable Case: If ( A ) is not diagonalizable, use the Jordan form to compute the matrix exponential.
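A minimal NumPy/SciPy sketch of solving \( \mathbf{x}' = A\mathbf{x} \) at a single time, comparing the matrix-exponential formula with the diagonalization route; the matrix, initial condition, and time are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import expm

# x' = A x with x(0) = x0; this A is diagonalizable with eigenvalues -1 and -3.
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
x0 = np.array([1.0, 0.0])
t = 0.5

# Direct matrix-exponential solution x(t) = e^{At} x0.
x_t = expm(A * t) @ x0

# Diagonalizable case: e^{At} = P e^{Dt} P^{-1}.
eigenvalues, P = np.linalg.eig(A)
x_diag = P @ np.diag(np.exp(eigenvalues * t)) @ np.linalg.inv(P) @ x0
print(np.allclose(x_t, x_diag))          # True
```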