Linear Algebra

The Determinant


Linear transformations tend to stretch or shrink (squish) space. So how do we measure by how much? We can measure the factor by which a given area increases or decreases.

Notation
Let $\mu_{ij}(A)$ be the $(n-1)\times(n-1)$ submatrix of $A$ obtained by removing the $i$th row and $j$th column of $A$.

Determinant
The determinant is the factor by which a given transformation changes any area. The determinant is $0$ if the transform squashes space into a lower dimension (for example, $\mathbb{R}^3$ is transformed into a plane, or even a line). Well... ish. Sorry. The determinant can be negative! This is when space is "inverted".

Ex. $\det\left(\begin{bmatrix}3 & 0\\0 & 2\end{bmatrix}\right)=6$. Do you see why? We stretch the $x$-axis by $3$, and the $y$-axis by $2$, so the area of any shape is stretched by a factor of $6$.
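As a quick numerical sanity check of this example (a sketch, assuming NumPy is available):

```python
import numpy as np

# The transformation scales x by 3 and y by 2, so the unit square
# (area 1) maps to a 3-by-2 rectangle (area 6) -- the determinant.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

print(np.linalg.det(A))  # 6.0
```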

The determinant ($\det(A)=|A|$) is more formally defined recursively:

  1. If $A$ is the $1\times 1$ matrix $A=[a_{11}]$, then $\det(A)=a_{11}$.
  2. If $A$ is an $n\times n$ matrix (where $n > 1$), then $\det(A)=\sum_{j=1}^n (-1)^{j+1}a_{1j}\det(\mu_{1j}(A))$.

For a matrix $A=[a_{ij}]$, the scalar quantity $\det(\mu_{ij}(A))$ is called a minor of $A$, and $(-1)^{i+j}\det(\mu_{ij}(A))$ is called a cofactor. To compute the determinant of $A$, we can

  1. Compute the cofactor of each element in the first row.
  2. Multiply each element in the first row by its cofactor and sum the results.

We can do this for any row or column, not just the first one. This immediately means that if a matrix has a zero row (or column), the determinant is zero. Similarly, for an upper triangular matrix, the determinant is the product of the diagonal entries: expanding along the first column at every step, only the diagonal term survives. The same is true for diagonal and lower triangular matrices.
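The two-step procedure above translates directly into a short recursive implementation. This is a sketch in plain Python; the helper names `minor` and `det` are mine, not standard:

```python
from typing import List

def minor(A: List[List[float]], i: int, j: int) -> List[List[float]]:
    """The submatrix mu_ij(A): remove row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A: List[List[float]]) -> float:
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:                      # base case: det([a11]) = a11
        return A[0][0]
    # Sum over the first row: each entry times its cofactor.
    # (-1) ** j matches (-1)^(1+j) from the 1-indexed formula.
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))
```

For example, `det([[3, 0], [0, 2]])` returns `6`, and the upper triangular matrix `[[1, 2, 3], [0, 4, 5], [0, 0, 6]]` gives `24`, the product of its diagonal.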

Properties

  • $(\mu_{ij}(A))^T=\mu_{ji}(A^T)$. Therefore, for any square matrix $A$, $\det(A)=\det(A^T)$.
  • If $A$ has two identical rows or columns, $\det(A)=0$. More generally, if $\operatorname{rank}(A) < n$, then $\det(A)=0$.
  • If $E$ is the elementary matrix corresponding to interchanging two rows of $A$, then $\det(E)=-1$ and $\det(EA)=-\det(A)=\det(E)\det(A)$.
  • If $E$ is the elementary row matrix that corresponds to multiplying a row of $A$ by a scalar $\lambda$, then $\det(E)=\lambda$ and $\det(EA)=\lambda\det(A)$.
  • If $E$ is the elementary row matrix corresponding to adding $\lambda$ times row $i$ of $A$ to row $j$, then $\det(E)=1$ and $\det(EA)=\det(A)$.
  • For any two $n\times n$ matrices $A$ and $B$, $\det(AB)=\det(A)\det(B)$.
    • From this theorem, it follows that if $A$ is an invertible matrix, $\det(A^{-1})=(\det(A))^{-1}$.
    • Also, similar matrices have the same determinant (proven by taking determinants of $A=P^{-1}BP$).
      • Note that this means if $\mathbb{V}$ is a finite-dimensional vector space and $T:\mathbb{V}\to\mathbb{V}$ is linear, then $\det(T)$, defined as the determinant of any matrix representation of $T$, is a well-defined scalar, independent of the choice of basis.
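The product rule and its consequences are easy to spot-check numerically. A sketch assuming NumPy, using random matrices (which are almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))  # change-of-basis matrix

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# det(A^{-1}) = det(A)^{-1}
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))

# Similar matrices share a determinant: det(P^{-1} B P) = det(B)
assert np.isclose(np.linalg.det(np.linalg.inv(P) @ B @ P), np.linalg.det(B))
```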

A square matrix $A$ is nonsingular if and only if its determinant is nonzero. There's a proof, but think of it this way: a zero determinant collapses space into a lower dimension (and therefore the kernel is nontrivial). That means the transformation isn't invertible. The proof (vaguely) relies on the facts that no elementary matrix has a zero determinant, and that if $A$ is singular, row reduction transforms it into an upper triangular matrix $B$ with at least one zero on its diagonal (meaning $\det(B)=0$); since the row operations only scale the determinant by nonzero factors, $\det(A)=0$.
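To see the singular case concretely, here is a sketch assuming NumPy; the matrix below is one I constructed so that its third column is the sum of the first two:

```python
import numpy as np

# Third column = first column + second column, so the transformation
# squashes R^3 onto a plane: the kernel is nontrivial and det(A) = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])

print(np.linalg.det(A))          # ~0, up to floating-point error
print(np.linalg.matrix_rank(A))  # 2: the image is a plane, not all of R^3
```

Because the columns are dependent, the image has dimension $2$ rather than $3$, and no inverse transformation can exist.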