Subsection 7.1.1 Determinants
There are a few motivations for the determinant. One is as a criterion for the invertibility of a matrix, which, recall, is equivalent to the corresponding system of equations having a unique solution. In particular, a square matrix \(A\) is invertible if and only if the system of equations \(A\vec{x} = \vec{b}\) has a unique solution.
First, consider the most basic case: a \(1 \times 1\) matrix \(A = \begin{bmatrix} a \end{bmatrix}\text{,}\) which is the coefficient matrix of the single-equation system,
\begin{equation*}
ax = b
\end{equation*}
or, in matrix form,
\begin{equation*}
\begin{bmatrix} a \end{bmatrix} \begin{bmatrix} x \end{bmatrix} = \begin{bmatrix} b \end{bmatrix}
\end{equation*}
Then, \(A\) is invertible, with \(A^{-1} = \begin{bmatrix} \frac{1}{a} \end{bmatrix}\text{,}\) precisely when \(a \neq 0\text{.}\) Equivalently, this “system” has the unique solution \(x = \frac{b}{a}\text{,}\) precisely when \(a \neq 0\text{.}\) In general, we will define the determinant in such a way that a matrix is invertible if and only if its determinant is non-zero. In this way, we define the determinant of a \(1 \times 1\) matrix as the entry itself.
Definition 7.1.1.
For a \(1 \times 1\) matrix \(A = \begin{bmatrix} a \end{bmatrix}\text{,}\) the determinant of \(A\) is \(\det{A} = a\text{.}\)
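For instance, for the \(1 \times 1\) matrix with single entry 5 (a number chosen here purely as an illustration),
\begin{equation*}
\det{\begin{bmatrix} 5 \end{bmatrix}} = 5 \neq 0
\end{equation*}
which matches the fact that the equation \(5x = b\) always has the unique solution \(x = \frac{b}{5}\text{.}\)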
For a \(2 \times 2\) matrix of the form,
\begin{equation*}
A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}
\end{equation*}
the corresponding system of equations is,
\begin{align*}
a x_1 + b x_2 \amp = y_1 \\
c x_1 + d x_2 \amp = y_2
\end{align*}
It turns out that \(A\) is invertible if and only if \(\Delta = ad - bc \neq 0\text{.}\) Indeed, solving this system gives,
\begin{equation*}
x_1 = \frac{1}{\Delta}(d y_1 - b y_2) \qquad x_2 = \frac{1}{\Delta}(ay_2 - cy_1)
\end{equation*}
provided that \(\Delta = ad - bc \neq 0\text{.}\) The quantity \(\Delta\) is the “difference of the products of the diagonals” of the \(2 \times 2\) matrix \(A\text{,}\) and it is this quantity that we define to be the determinant.
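To see where these solution formulas come from, multiply the first equation by \(d\text{,}\) the second by \(b\text{,}\) and subtract, which eliminates \(x_2\text{,}\)
\begin{align*}
d(a x_1 + b x_2) - b(c x_1 + d x_2) \amp = d y_1 - b y_2\\
(ad - bc) x_1 \amp = d y_1 - b y_2
\end{align*}
so that \(x_1 = \frac{1}{\Delta}(d y_1 - b y_2)\) whenever \(\Delta = ad - bc \neq 0\text{.}\) The formula for \(x_2\) follows similarly, by eliminating \(x_1\) instead.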
Definition 7.1.2.
For a \(2 \times 2\) matrix \(A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}\text{,}\) the determinant of \(A\text{,}\) \(\det{A}\text{,}\) is defined by,
\begin{equation*}
\det{A} = ad - bc
\end{equation*}
The notation \(\det{A}\) represents the determinant \(\det\) as a function, with argument \(A\text{.}\) That is, the determinant can be thought of as a function whose input is a (square) matrix, and whose output is a number. When writing the matrix explicitly, we could write,
\begin{equation*}
\det{\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}} = ad - bc
\end{equation*}
Determinants are also denoted with vertical bars, as \(\abs{A} = ad - bc\text{,}\) or more explicitly,
\begin{equation*}
\begin{vmatrix} a \amp b \\ c \amp d \end{vmatrix} = ad - bc
\end{equation*}
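For example, for a specific numerical matrix (chosen here purely as an illustration),
\begin{equation*}
\begin{vmatrix} 1 \amp 2 \\ 3 \amp 4 \end{vmatrix} = (1)(4) - (2)(3) = -2
\end{equation*}
which is non-zero, and so the matrix \(\begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix}\) is invertible.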
Intuitively, these vertical bars are like absolute value bars, because it turns out that the determinant of a matrix can be thought of in some sense as a “magnitude” or “size” of the matrix, as we will see later.
Subsection 7.1.2 Determinants of \(3 \times 3\) Matrices
For \(3 \times 3\) matrices and larger, determinants become more complex. Let \(A\) be a \(3 \times 3\) matrix,
\begin{equation*}
A = \begin{bmatrix} a_{11} \amp a_{12} \amp a_{13} \\
a_{21} \amp a_{22} \amp a_{23} \\
a_{31} \amp a_{32} \amp a_{33} \end{bmatrix}
\end{equation*}
Suppose that \(A\) is invertible. Then, we want to determine what restrictions this places on the entries of \(A\text{,}\) through the row reduction of \(A\text{.}\) First, recall that \(A\) being invertible implies that \(A\) is row equivalent to \(I\text{,}\) and (since \(A\) is square) \(A\) has a pivot in every row and column. In particular, there is at least one non-zero entry in the first column. Without loss of generality, assume that \(a_{11} \neq 0\) (otherwise, perform a row interchange to get a pivot in the first row). Then, multiply rows 2 and 3 by \(a_{11}\text{,}\) and subtract suitable multiples of row 1, to get,
\begin{equation*}
\begin{bmatrix} a_{11} \amp a_{12} \amp a_{13} \\
a_{11} a_{21} \amp a_{11} a_{22} \amp a_{11} a_{23} \\
a_{11} a_{31} \amp a_{11} a_{32} \amp a_{11} a_{33} \end{bmatrix} \sim \begin{bmatrix} a_{11} \amp a_{12} \amp a_{13} \\
0 \amp a_{11} a_{22} - a_{12} a_{21} \amp a_{11} a_{23} - a_{13} a_{21} \\
0 \amp a_{11} a_{32} - a_{12} a_{31} \amp a_{11} a_{33} - a_{13} a_{31} \end{bmatrix}
\end{equation*}
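In other words, the combined row operations performed here are,
\begin{align*}
a_{11} R_2 - a_{21} R_1 \amp \longrightarrow R_2\\
a_{11} R_3 - a_{31} R_1 \amp \longrightarrow R_3
\end{align*}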
Notice that we normally would have instead done,
\begin{align*}
R_2 - \frac{a_{21}}{a_{11}} R_1 \amp \longrightarrow R_2\\
R_3 - \frac{a_{31}}{a_{11}} R_1 \amp \longrightarrow R_3
\end{align*}
which also produce 0's below the first pivot. However, these row operations lead to fractional entries, which make the computations more complicated.
Next, again since \(A\) has a pivot in every column, at least one of the \((2,2)\)-entry or the \((3,2)\)-entry of this new matrix is non-zero. Without loss of generality, assume that the \((2,2)\)-entry is non-zero (otherwise, use a row interchange). Then, to obtain a 0 in the \((3,2)\)-entry, first multiply row 3 by \(a_{11} a_{22} - a_{12} a_{21}\text{.}\) This results in,
\begin{equation*}
\begin{bmatrix} a_{11} \amp a_{12} \amp a_{13} \\
0 \amp a_{11} a_{22} - a_{12} a_{21} \amp a_{11} a_{23} - a_{13} a_{21} \\
0 \amp (a_{11} a_{32} - a_{12} a_{31})(a_{11} a_{22} - a_{12} a_{21}) \amp (a_{11} a_{33} - a_{13} a_{31})(a_{11} a_{22} - a_{12} a_{21}) \end{bmatrix}
\end{equation*}
and then add \(-(a_{11} a_{32} - a_{12} a_{31})\) times row 2 to row 3, to finally obtain,
\begin{equation*}
\begin{bmatrix} a_{11} \amp a_{12} \amp a_{13} \\
0 \amp a_{11} a_{22} - a_{12} a_{21} \amp a_{11} a_{23} - a_{13} a_{21} \\
0 \amp 0 \amp (a_{11} a_{33} - a_{13} a_{31})(a_{11} a_{22} - a_{12} a_{21}) - (a_{11} a_{32} - a_{12} a_{31})(a_{11} a_{23} - a_{13} a_{21}) \end{bmatrix}
\end{equation*}
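Written in the same combined form as before, the row operation used in this last step is,
\begin{equation*}
(a_{11} a_{22} - a_{12} a_{21}) R_3 - (a_{11} a_{32} - a_{12} a_{31}) R_2 \longrightarrow R_3
\end{equation*}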
Then, this matrix is row equivalent to \(I\) only if the \((3,3)\)-entry is non-zero. In other words, the following expression is non-zero,
\begin{align*}
\amp (a_{11} a_{33} - a_{13} a_{31})(a_{11} a_{22} - a_{12} a_{21}) - (a_{11} a_{32} - a_{12} a_{31})(a_{11} a_{23} - a_{13} a_{21})\\
\amp = a_{11} \brac{a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} - a_{13} a_{22} a_{31}} \neq 0
\end{align*}
By assumption, \(a_{11} \neq 0\text{,}\) so this entry is non-zero if and only if the other factor is non-zero,
\begin{equation*}
a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} - a_{13} a_{22} a_{31} \neq 0
\end{equation*}
It turns out that the converse is also true: if this quantity is non-zero, then the matrix \(A\) is invertible. For now, we take this quantity as the definition of the determinant of a \(3 \times 3\) matrix.
Definition 7.1.3.
Let \(A\) be a \(3 \times 3\) matrix. Then, the determinant of \(A\text{,}\) \(\det{A}\text{,}\) is defined by,
\begin{equation*}
\boxed{\det{A} = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} - a_{13} a_{22} a_{31}}
\end{equation*}
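As a concrete illustration of this formula (the particular matrix below is chosen purely as an example), consider,
\begin{align*}
\det{\begin{bmatrix} 1 \amp 2 \amp 3 \\ 4 \amp 5 \amp 6 \\ 7 \amp 8 \amp 10 \end{bmatrix}} \amp = (1)(5)(10) + (2)(6)(7) + (3)(4)(8) - (1)(6)(8) - (2)(4)(10) - (3)(5)(7)\\
\amp = 50 + 84 + 96 - 48 - 80 - 105\\
\amp = -3
\end{align*}
Since this determinant is non-zero, this matrix is invertible.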
This definition is notationally complex and difficult to remember, so there are multiple alternate interpretations which make hand computations easier.
One pattern is that the formula is a sum of 6 terms, where each term is a product of 3 entries of the matrix, with exactly one entry taken from each row and each column. For example, the term \(a_{12} a_{23} a_{31}\) uses one entry from each of rows 1, 2, 3, and one entry from each of columns 2, 3, 1.