The key to computing determinants efficiently is to understand how the determinant of a matrix is affected when its entries change, and in particular how it is affected by elementary row operations.
First, consider the \(2 \times 2\) case. Let \(A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}\) be a \(2 \times 2\) matrix, with determinant \(\det{A} = ad - bc\text{.}\) Consider how this determinant is affected by row operations.
For scaling, say row 1 is scaled by \(k\text{,}\)
\begin{align*}
\det{\begin{bmatrix} ka \amp kb \\ c \amp d \end{bmatrix}} \amp = ka(d) - kb(c)\\
\amp = k\brac{ad - bc} = k \cdot \det{A}
\end{align*}
The determinant is also scaled by \(k\text{.}\)
For row exchange, consider exchanging row 1 and 2,
\begin{align*}
\det{\begin{bmatrix} c \amp d \\ a \amp b \end{bmatrix}} \amp = cb - ad\\
\amp = -\brac{ad - bc} = -\det{A}
\end{align*}
This row operation changes the sign of the determinant.
For row replacement, say \(R_2 + k R_1 \rightarrow R_2\text{,}\)
\begin{align*}
\det{\begin{bmatrix} a \amp b \\ c + ka \amp d + kb \end{bmatrix}} \amp = a(d + kb) - b(c + ka)\\
\amp = ad - bc + abk - abk\\
\amp = ad - bc = \det{A}
\end{align*}
This row operation doesn't change the value of the determinant.
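As a quick check of the algebra above, the short computation below verifies all three \(2 \times 2\) identities symbolically. It is only a sketch and assumes the SymPy library is available; the variable names are ours.
# Symbolic check of the three 2x2 row-operation identities (assumes SymPy).
from sympy import symbols, Matrix, simplify

a, b, c, d, k = symbols('a b c d k')
A = Matrix([[a, b], [c, d]])

scaled   = Matrix([[k*a, k*b], [c, d]])          # scale row 1 by k
swapped  = Matrix([[c, d], [a, b]])              # interchange rows 1 and 2
replaced = Matrix([[a, b], [c + k*a, d + k*b]])  # R2 + k*R1 -> R2

print(simplify(scaled.det() - k*A.det()))        # 0
print(simplify(swapped.det() + A.det()))         # 0
print(simplify(replaced.det() - A.det()))        # 0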
It turns out that these properties hold for square matrices of any size.
Theorem 7.3.1. Determinants and elementary row operations.
Let \(A\) be a square matrix, and let \(B\) be the matrix obtained from \(A\) by a single elementary row operation. Then,
If a multiple of one row is added to another row, then \(\det{B} = \det{A}\text{.}\)
If two rows are interchanged, then \(\det{B} = -\det{A}\text{.}\)
If one row is scaled by \(k \neq 0\text{,}\) then \(\det{B} = k \cdot \det{A}\text{.}\)
In particular, only row interchanges and scalings affect the value of the determinant.
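The same behaviour can be observed numerically for larger matrices. The following sketch applies each row operation to a random \(4 \times 4\) matrix and compares the determinants; it assumes NumPy is available, and the names are illustrative only.
# Numerical illustration of Theorem 7.3.1 on a random 4x4 matrix (assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
k = 3.0

B_scale = A.copy(); B_scale[0, :] *= k                     # scale row 1 by k
B_swap  = A.copy(); B_swap[[0, 1], :] = B_swap[[1, 0], :]  # interchange rows 1 and 2
B_repl  = A.copy(); B_repl[1, :] += k * B_repl[0, :]       # R2 + k*R1 -> R2

print(np.isclose(np.linalg.det(B_scale),  k * np.linalg.det(A)))  # True
print(np.isclose(np.linalg.det(B_swap),  -np.linalg.det(A)))      # True
print(np.isclose(np.linalg.det(B_repl),   np.linalg.det(A)))      # True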
Subsection 7.3.2 Evaluating Determinants of Higher Order
The preceding properties of determinants lead to a method for evaluating determinants that is particularly efficient for higher-order matrices. Consider a square matrix \(A\text{.}\) First, use row reduction to convert \(A\) to REF. A square matrix in REF is upper triangular, so denote this matrix by \(U\text{.}\) The determinant of \(U\) is the product of its diagonal entries, and the determinant of \(A\) is a multiple of the determinant of \(U\text{,}\)
\begin{equation*}
\det{A} = c \cdot \det{U}
\end{equation*}
where \(c\) is the product of all of the constant factors and sign changes accumulated through row interchanges and scalings. In particular, if we perform the conversion to REF without using scaling operations (which, recall, is always possible), then \(c = (-1)^r\text{,}\) where \(r\) is the number of row interchanges, and so
\begin{equation*}
\det{A} = (-1)^r \cdot \det{U}\text{.}
\end{equation*}
This is the algorithm that most computer programs use to evaluate determinants.
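A minimal sketch of this idea, under the assumption that NumPy is available, is given below. It reduces a copy of the matrix to REF using only row interchanges and row replacements, tracks the sign changes caused by the interchanges, and returns the signed product of the diagonal entries. The function name det_by_row_reduction is ours, not a library routine.
# Sketch: determinant via row reduction to REF (assumes NumPy; names are illustrative).
import numpy as np

def det_by_row_reduction(A):
    U = np.array(A, dtype=float)   # work on a copy in floating point
    n = U.shape[0]
    sign = 1.0                     # tracks (-1)^(number of row interchanges)
    for j in range(n):
        # Choose the row with the largest entry in column j (partial pivoting).
        p = j + np.argmax(np.abs(U[j:, j]))
        if np.isclose(U[p, j], 0.0):
            return 0.0             # no pivot in this column: matrix is singular
        if p != j:
            U[[j, p], :] = U[[p, j], :]   # row interchange flips the sign
            sign = -sign
        # Row replacements: eliminate entries below the pivot (no effect on det).
        for i in range(j + 1, n):
            U[i, :] -= (U[i, j] / U[j, j]) * U[j, :]
    return sign * np.prod(np.diag(U))

A = np.array([[2.0, 1.0, 3.0], [4.0, 1.0, 7.0], [-2.0, 5.0, 0.0]])
print(det_by_row_reduction(A), np.linalg.det(A))   # the two values should agree
The two printed values should agree up to rounding error; NumPy's own det is based on the same idea, via an LU factorization.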
Subsection 7.3.3 Determinants and Invertibility
The previous algorithm for evaluating determinants also provides theoretical insight into the relationship between invertibility and determinants.
If \(A\) is invertible, then \(U\) has a pivot in every row, so its diagonal entries are all non-zero, and hence \(\det{U} \neq 0\) and \(\det{A} \neq 0\text{.}\) Otherwise, if \(A\) is not invertible, then at least one of the diagonal entries of \(U\) is equal to 0. This implies that \(\det{U} = 0\text{,}\) and so \(\det{A} = 0\) also. This proves the main theorem relating determinants and invertible matrices.
Theorem 7.3.2. Invertible if and only if non-zero determinant.
A square matrix \(A\) is invertible if and only if \(\det{A} \neq 0\text{.}\)
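For example, a matrix with two equal rows is not invertible (its rows are linearly dependent), so by the theorem its determinant must be 0. A small numerical check, again assuming NumPy, is shown below.
# A singular matrix (two equal rows) has determinant 0 (assumes NumPy).
import numpy as np

S = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],   # row 2 equals row 1, so S is not invertible
              [4.0, 5.0, 6.0]])
print(np.isclose(np.linalg.det(S), 0.0))   # True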
Subsection 7.3.4 Number of Operations for Computing Determinants
Evaluating the determinant of an \(n \times n\) matrix by cofactor expansion requires on the order of \(n!\) arithmetic operations, whereas the row reduction algorithm above requires only on the order of \(n^3\) operations.
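To get a sense of the gap, the following sketch (plain Python, standard library only) compares \(n!\) with \(n^3\) for a few values of \(n\text{.}\)
# Compare the growth of n! (cofactor expansion) with n^3 (row reduction).
from math import factorial

for n in (5, 10, 15, 20, 25):
    print(f"n = {n:2d}:  n! = {float(factorial(n)):.3e}   n^3 = {n**3}")
Already at \(n = 25\text{,}\) cofactor expansion would require roughly \(10^{25}\) operations, while row reduction needs only a few thousand.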
Exercises 7.3.5
1.
In Python, implement a program that computes determinants using row reduction and the product of the diagonal entries of the reduced matrix.