Consider the number of operations required to perform row reduction. Let \(A\) be an \(n \times n\) matrix. To analyze the worst case, we will assume that every entry of \(A\) is non-zero (zero entries would only reduce the computations required).
The first step of elimination requires creating zeros in the first column through row replacement operations. In a row replacement operation (\(R_i + k R_j \rightarrow R_i\)), updating a single entry of \(R_i\) involves one multiplication (of \(k\) by the corresponding entry of \(R_j\)) and one addition (adding the result to the entry of \(R_i\)). For simplicity, we will assume that each operation (addition, subtraction, multiplication, division) takes roughly the same amount of time.
We call a single one of these operations a flop (short for floating point operation). Traditionally, flop referred only to a multiplication or division, because these operations took somewhat more time than the simpler operations of addition and subtraction, and so the latter could be ignored, relatively speaking. With modern technology, all four operations take about the same time.
Thus, modifying a single entry with a replacement row operation requires 2 flops. The first step of row reduction modifies \(n-1\) rows of the matrix (every row except the pivot row), and each of these rows has \(n\) entries. So, the first step requires \(2n(n-1)\) flops.
The second step involves a submatrix of size \((n-1) \times (n-1)\text{.}\) Here, row reduction involves \(n-2\) rows, each with \(n-1\) entries, for a total of \(2(n-1)(n-2)\) flops.
Continuing in this way, the subsequent steps require \(2(n-2)(n-3), 2(n-3)(n-4), \dots\) flops. The final step, acting on a \(2 \times 2\) submatrix, requires \(2 \cdot 2 \cdot 1\) flops. Thus, the total number of flops is,
\begin{equation*}
= 2n(n-1) + 2(n-1)(n-2) + \dots + 2 \cdot 3 \cdot 2 + 2 \cdot 2 \cdot 1
\end{equation*}
In summation notation, with the terms written in increasing order, this is,
\begin{equation*}
= \sum_{k=1}^{n-1} 2 k(k + 1)
\end{equation*}
This sum can be evaluated to find an explicit formula,
\begin{align*}
\amp = 2 \sum_{k=1}^{n-1} (k^2 + k)\\
\amp = 2 \brac{\sum_{k=1}^{n-1} k^2 + \sum_{k=1}^{n-1} k}
\end{align*}
Then, we can apply the summation formulas,
\begin{equation*}
\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6} \qquad \sum_{k=1}^n k = \frac{n(n+1)}{2}
\end{equation*}
Here, the upper limit of the summation is \(n-1\) rather than \(n\text{,}\) so we substitute \(n-1\) for \(n\) when applying the formulas,
\begin{align*}
\amp = 2 \brac{\frac{(n-1) \cdot n \cdot (2(n-1)+1)}{6} + \frac{(n-1) \cdot n}{2}}
\end{align*}
Then, simplifying,
\begin{align*}
\amp = 2 \brac{\frac{n(n-1)(2n-1) + 3n(n-1)}{6}}\\
\amp = 2 \brac{\frac{n(n-1)((2n-1)+3)}{6}}\\
\amp = 2 \brac{\frac{n(n-1)(2n+2)}{6}}\\
\amp = \frac{2n(n-1)(n+1)}{3}
\end{align*}
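As a quick sanity check, the closed form can be compared against the term-by-term sum numerically. A minimal Python sketch (the function names are illustrative, not from the text):

```python
# Compare the term-by-term flop count for forward elimination
# with the closed form 2n(n-1)(n+1)/3 derived above.
def elimination_flops_sum(n):
    # Terms 2k(k+1) for k = 1, ..., n-1, as in the summation above.
    return sum(2 * k * (k + 1) for k in range(1, n))

def elimination_flops_formula(n):
    # Closed form; always an integer, since one of n-1, n, n+1
    # is divisible by 3.
    return 2 * n * (n - 1) * (n + 1) // 3

for n in [2, 3, 10, 100]:
    assert elimination_flops_sum(n) == elimination_flops_formula(n)
```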
Then, consider the asymptotic behavior. When \(n\) is large,
\begin{equation*}
\frac{2n(n-1)(n+1)}{3} \approx \frac{2n^3}{3}
\end{equation*}
In particular, on the order of \(n^3\) operations.
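This approximation can also be checked numerically: the ratio of the exact count to the estimate \(\frac{2n^3}{3}\) is \(\frac{(n-1)(n+1)}{n^2} = 1 - \frac{1}{n^2}\text{,}\) which tends to \(1\text{.}\) A brief, purely illustrative Python sketch:

```python
# Ratio of the exact flop count 2n(n-1)(n+1)/3 to the
# asymptotic estimate 2n^3/3; the ratio is 1 - 1/n^2.
for n in [10, 100, 1000]:
    exact = 2 * n * (n - 1) * (n + 1) / 3
    estimate = 2 * n**3 / 3
    print(n, exact / estimate)
```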
Next, consider back substitution. Here, the first step requires creating a 0 in every entry above the pivot in the \(n\)th column. There are \(n-1\) such entries, and again each uses a single row replacement, which is made up of 2 flops. So, this step requires \(2(n-1)\) flops. The next step involves \(n-2\) entries, and so has \(2(n-2)\) flops. Continuing in this way, the last step involves changing the single entry above the pivot in the second column, for \(2 \cdot 1\) flops. Then, in total, the number of flops is,
\begin{align*}
\amp = 2(n-1) + 2(n-2) + \dots + 2 \cdot 2 + 2 \cdot 1\\
\amp = \sum_{k=1}^{n-1} 2k\\
\amp = 2 \sum_{k=1}^{n-1} k\\
\amp = 2 \cdot \frac{n(n-1)}{2}\\
\amp = n(n-1)
\end{align*}
As \(n\) becomes large, this is approximately \(n^2\) flops.
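The back substitution count can be verified the same way. A minimal Python sketch (the function name is illustrative):

```python
# Back substitution: the steps clear n-1, n-2, ..., 1 entries,
# each costing 2 flops; the total should match n(n-1).
def back_substitution_flops_sum(n):
    return sum(2 * k for k in range(1, n))

for n in [2, 5, 50]:
    assert back_substitution_flops_sum(n) == n * (n - 1)
```

Comparing the two counts, the roughly \(\frac{2n^3}{3}\) flops of the elimination phase dominate the roughly \(n^2\) flops of back substitution for large \(n\text{.}\)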