Think of the rank of a matrix as the "true dimension" of the space spanned by its rows or columns. It tells you how many directions are genuinely independent or non-redundant within the matrix.
A matrix might have many columns, but if some are just linear combinations of others, the rank tells you the number of columns that are truly needed to represent the core information.
The rank of a matrix \(\mathbf{A}\), denoted \(\text{rank}(\mathbf{A})\), is defined as the dimension of its Column Space (Col(A)).
$$ \text{rank}(\mathbf{A}) = \dim(\text{Col}(\mathbf{A})) $$
Equivalently, it is the dimension of its Row Space (Row(A)).
$$ \text{rank}(\mathbf{A}) = \dim(\text{Row}(\mathbf{A})) $$
The rank represents the maximum number of linearly independent columns (or rows) in the matrix.
A fundamental theorem states that for any \(m \times n\) matrix \(\mathbf{A}\), the dimension of the row space is equal to the dimension of the column space.
$$ \dim(\text{Row}(\mathbf{A})) = \dim(\text{Col}(\mathbf{A})) = \text{rank}(\mathbf{A}) $$
Gaussian Elimination (Row Echelon Form): Reduce the matrix \(\mathbf{A}\) to its Row Echelon Form (REF) or Reduced Row Echelon Form (RREF) using elementary row operations. The rank is equal to the number of non-zero rows (or equivalently, the number of pivot positions) in the echelon form. This is the most common practical method.
Using Linear Independence: Find the maximum number of columns (or rows) that form a linearly independent set.
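In practice, numerical libraries compute the rank from the singular values rather than by row reduction, but the result agrees with the pivot count from Gaussian elimination. A minimal numpy sketch, using an illustrative matrix whose third column is the sum of the first two:

```python
import numpy as np

# Third column = first column + second column, so only 2 columns
# are linearly independent.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])

# numpy counts the singular values above a tolerance to get the rank.
print(np.linalg.matrix_rank(A))  # 2
```

The redundant column contributes nothing new to the column space, so the rank is 2, not 3.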
\(0 \le \text{rank}(\mathbf{A}) \le \min(m, n)\). The rank cannot exceed the number of rows or columns.
Full Rank:
A matrix has full row rank if \(\text{rank}(\mathbf{A}) = m\) (number of rows). Its rows are linearly independent. Requires \(m \le n\).
A matrix has full column rank if \(\text{rank}(\mathbf{A}) = n\) (number of columns). Its columns are linearly independent. Requires \(n \le m\).
A square \(n \times n\) matrix has full rank if \(\text{rank}(\mathbf{A}) = n\). This is equivalent to the matrix being invertible and having a non-zero determinant.
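The equivalence between full rank, invertibility, and a non-zero determinant can be checked numerically; a small sketch with two illustrative \(2 \times 2\) matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # det = 5, so full rank

assert np.linalg.matrix_rank(A) == 2
assert abs(np.linalg.det(A)) > 1e-12
A_inv = np.linalg.inv(A)     # succeeds precisely because rank(A) = n

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row = 2 * first row
assert np.linalg.matrix_rank(B) == 1   # rank-deficient
assert abs(np.linalg.det(B)) < 1e-12   # determinant is zero
```

Attempting `np.linalg.inv(B)` would raise `LinAlgError`, consistent with rank \(< n\) implying non-invertibility.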
Rank of Transpose: \(\text{rank}(\mathbf{A}^T) = \text{rank}(\mathbf{A})\).
Rank of Product: \(\text{rank}(\mathbf{AB}) \le \min(\text{rank}(\mathbf{A}), \text{rank}(\mathbf{B}))\).
Rank and Invertibility: An \(n \times n\) matrix \(\mathbf{A}\) is invertible if and only if \(\text{rank}(\mathbf{A}) = n\).
Rank-Nullity Theorem: For an \(m \times n\) matrix \(\mathbf{A}\), \(\text{rank}(\mathbf{A}) + \text{nullity}(\mathbf{A}) = n\), where \(\text{nullity}(\mathbf{A})\) is the dimension of the null space (the space of solutions to \(\mathbf{Ax=0}\)).
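The Rank-Nullity Theorem can be verified numerically by counting zero singular values, whose count equals the nullity. A sketch with an illustrative \(2 \times 3\) matrix:

```python
import numpy as np

# 2x3 matrix with independent rows: rank 2, so nullity should be n - 2 = 1.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0]])
n = A.shape[1]

rank = np.linalg.matrix_rank(A)

# Nullity = dimension of the solution space of Ax = 0, which equals
# the number of columns minus the number of non-zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - np.count_nonzero(s > 1e-10)

assert rank + nullity == n  # rank-nullity: 2 + 1 == 3
```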
The rank tells you the dimension of the image (range) of the linear transformation \(\mathbf{x} \mapsto \mathbf{Ax}\). If \(\text{rank}(\mathbf{A}) < n\) for an \(n \times n\) matrix, the transformation collapses the \(n\)-dimensional input space into a lower-dimensional subspace (a line, a plane, etc.), namely the column space.
(Visual Idea: A \(3 \times 3\) matrix with rank 2 maps all points in 3D space onto a specific plane through the origin. An Excalidraw sketch showing this collapse would be illustrative.)
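This collapse can also be checked numerically: for a rank-2 \(3 \times 3\) matrix, every output \(\mathbf{Ax}\) lies on a single plane through the origin, i.e. it is orthogonal to that plane's normal vector. A sketch with an illustrative matrix whose third column is the sum of the first two:

```python
import numpy as np

# Rank-2 matrix: third column = first + second, so the image is a plane.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
assert np.linalg.matrix_rank(A) == 2

# This vector is orthogonal to both independent columns, hence normal
# to the plane that is the column space.
normal = np.array([1.0, 1.0, -1.0])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(3)
    y = A @ x
    assert abs(normal @ y) < 1e-9  # every output lies on the plane
```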
Linear Independence: Rank is defined by the maximum number of linearly independent rows/columns.
Basis and Dimension: Rank is the dimension of the column/row space.
Matrix Invertibility & Determinant: A square matrix is invertible \(\iff\) it has full rank \(\iff\) its determinant is non-zero.
Solving Linear Systems \(\mathbf{Ax=b}\): The rank determines the existence and uniqueness of solutions (via the Rank-Nullity Theorem and by comparing \(\text{rank}(\mathbf{A})\) to \(\text{rank}([\mathbf{A}\,|\,\mathbf{b}])\)).
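The existence criterion (Rouché-Capelli) says \(\mathbf{Ax=b}\) is solvable iff appending \(\mathbf{b}\) to \(\mathbf{A}\) does not increase the rank, i.e. \(\mathbf{b}\) is in the column space. A sketch with an illustrative rank-1 matrix and the helper function `has_solution` (a name introduced here, not a library API):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1: second row = 2 * first row

b_consistent   = np.array([3.0, 6.0])  # in the column space of A
b_inconsistent = np.array([3.0, 7.0])  # not in the column space

def has_solution(A, b):
    """Ax = b is solvable iff rank(A) == rank([A | b])."""
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

assert has_solution(A, b_consistent)
assert not has_solution(A, b_inconsistent)
```

When a solution exists and \(\text{rank}(\mathbf{A}) = n\), it is unique; when \(\text{rank}(\mathbf{A}) < n\), the null space contributes infinitely many solutions.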
Dimensionality Reduction (PCA, SVD): Techniques like Singular Value Decomposition (SVD) reveal the rank of a matrix (number of non-zero singular values) and can be used to find low-rank approximations, effectively reducing the dimensionality of data by identifying the most significant "directions" (independent components).
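A numpy sketch of this idea: build a matrix that is exactly rank 2, confirm that only two singular values are non-zero, and recover it from a truncated SVD (the best low-rank approximation by the Eckart-Young theorem). The construction via an outer-product sum is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(42)
# A (6x2) @ (2x5) product is (generically) exactly rank 2.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))

U, s, Vt = np.linalg.svd(A)
# Only 2 singular values are numerically non-zero -> rank 2.
assert np.count_nonzero(s > 1e-10) == 2

# Truncated SVD: keep only the top k = 2 singular triplets.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert np.allclose(A, A_k)  # a rank-2 matrix is exactly its rank-2 approximation
```

For noisy, approximately low-rank data the same truncation discards the smallest singular values, keeping only the most significant directions.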
Model Identifiability: In statistics, the rank of design matrices relates to whether model parameters can be uniquely estimated.