
Section 6.2 Linear Independence

The concept of linear independence is crucial to linear algebra.

Subsection 6.2.1 Linear Independence

Definition 6.2.1.

An indexed set of vectors \(\set{\vec{v}_1, \dots, \vec{v}_p}\) in \(\mathbb{R}^n\) is linearly independent if the vector equation,
\begin{equation*} a_1 \vec{v}_1 + \dots + a_p \vec{v}_p = \vec{0} \end{equation*}
has only the trivial solution \(a_1 = 0, a_2 = 0, \dots, a_p = 0\text{.}\) That is, the only linear combination which gives \(\vec{0}\) is the trivial combination.
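For example, the standard basis vectors \(\vec{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\) and \(\vec{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}\) in \(\mathbb{R}^2\) are linearly independent, since
\begin{equation*} a_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + a_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \vec{0} \end{equation*}
forces \(a_1 = a_2 = 0\text{.}\)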

Definition 6.2.2.

Otherwise, \(\set{\vec{v}_1, \dots, \vec{v}_p}\) is linearly dependent. In other words, \(\set{\vec{v}_1, \dots, \vec{v}_p}\) is linearly dependent if there exist scalars \(a_1, \dots, a_p\text{,}\) not all zero, such that,
\begin{equation*} a_1 \vec{v}_1 + \dots + a_p \vec{v}_p = \vec{0} \end{equation*}
The “zero” linear combination \(a_1 \vec{v}_1 + \dots + a_p \vec{v}_p\) where \(a_1 = \dots = a_p = 0\) is called the trivial linear combination. Any other linear combination is called non-trivial. Then, a collection of vectors is dependent if there exists a non-trivial linear combination which adds up to \(\vec{0}\text{.}\)
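For example, in \(\mathbb{R}^2\text{,}\) the set \(\set{\vec{v}_1, \vec{v}_2, \vec{v}_3}\) with
\begin{equation*} \vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \vec{v}_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{equation*}
is linearly dependent, because \(\vec{v}_1 + \vec{v}_2 - \vec{v}_3 = \vec{0}\) is a non-trivial linear combination which gives \(\vec{0}\text{.}\)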
The set \(\set{\vec{0}}\) is dependent, because \(a \vec{0} = \vec{0}\) for any \(a \neq 0\text{.}\) Similarly, \(\set{\vec{0}, \vec{v}}\) is dependent, because \(a \cdot \vec{0} + 0 \vec{v} = \vec{0}\) for any \(a \neq 0\) (for example, \(a = 1\)). More generally, any set containing the zero vector is dependent, because we can form a non-trivial linear combination with a weight of 1 on \(\vec{0}\) and 0 on every other vector. More precisely, for \(\set{\vec{0}, \vec{v}_1, \dots, \vec{v}_p}\text{,}\)
\begin{equation*} 1 \vec{0} + 0 \vec{v}_1 + \dots + 0 \vec{v}_p = \vec{0} \end{equation*}
A set containing a single vector \(\set{\vec{v}}\) is linearly independent if and only if \(\vec{v}\) is non-zero. This is because the equation \(a\vec{v} = \vec{0}\) has only the trivial solution \(a = 0\) if \(\vec{v} \neq \vec{0}\text{.}\) If \(\vec{v} = \vec{0}\text{,}\) then \(a\vec{0} = \vec{0}\) has non-trivial solutions (any non-zero value for \(a\)).
Consider a set with two vectors \(\set{\vec{v}_1, \vec{v}_2}\text{,}\) and consider the equation,
\begin{equation*} a \vec{v}_1 + b\vec{v}_2 = \vec{0} \end{equation*}
If \(\vec{v}_1, \vec{v}_2\) are linearly dependent, then at least one of \(a, b\) is non-zero. If, say, \(a \neq 0\text{,}\) then solving for \(\vec{v}_1\text{,}\)
\begin{equation*} \vec{v}_1 = -\frac{b}{a} \vec{v}_2 \end{equation*}
In other words, \(\vec{v}_1\) is a scalar multiple of \(\vec{v}_2\text{.}\) Conversely, if one vector is a scalar multiple of the other, say \(\vec{v}_1 = k \vec{v}_2\text{,}\) then \(\vec{v}_1 - k \vec{v}_2 = \vec{0}\text{.}\) This is a non-trivial linear combination of \(\set{\vec{v}_1, \vec{v}_2}\text{,}\) and so \(\set{\vec{v}_1, \vec{v}_2}\) is a linearly dependent set.
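For example, \(\vec{v}_1 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}\) and \(\vec{v}_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\) are linearly dependent, since \(\vec{v}_1 = 2 \vec{v}_2\text{,}\) or equivalently \(\vec{v}_1 - 2 \vec{v}_2 = \vec{0}\text{.}\)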
Geometrically, two vectors are linearly dependent if and only if they lie on the same line through the origin; equivalently, they are linearly independent if and only if they do not lie on a common line through the origin.
In summary, a set of two vectors is linearly dependent if and only if one of the vectors is a scalar multiple of the other.
To determine whether a general set of vectors \(\set{\vec{v}_1, \dots, \vec{v}_p}\) is linearly independent, solve the vector equation,
\begin{equation*} a_1 \vec{v}_1 + \dots + a_p \vec{v}_p = \vec{0} \end{equation*}
for the coefficients \(a_1, \dots, a_p\text{.}\) Recall that this is equivalent to solving the homogeneous system \(A \vec{x} = \vec{0}\text{,}\) where the columns of \(A\) are \(\vec{v}_1, \dots, \vec{v}_p\text{.}\) Then, row reduce the augmented matrix \(\begin{bmatrix} A \mid \vec{0} \end{bmatrix}\text{.}\) Recall that there is a non-trivial solution if and only if there is a free variable, or equivalently, if and only if there exists a non-zero vector in the null space. Thus, in summary, the set \(\set{\vec{v}_1, \dots, \vec{v}_p}\) is linearly independent if and only if \(A \vec{x} = \vec{0}\) has only the trivial solution, that is, if and only if every column of \(A\) is a pivot column.
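As a concrete sketch of this procedure, the following Python snippet (using the sympy library; the three vectors here are illustrative choices, not taken from the text) row reduces \(A\) and checks whether every column is a pivot column.

import sympy as sp

# Columns of A are the vectors v1, v2, v3 in R^3 (illustrative values)
A = sp.Matrix([[1, 2, 1],
               [0, 1, 1],
               [1, 0, -1]])

# Row reduce A; for the homogeneous system [A | 0], the augmented zero
# column never changes, so the rref of A alone suffices.
R, pivot_cols = A.rref()

# The columns are linearly independent iff every column is a pivot
# column, i.e. there are no free variables.
independent = len(pivot_cols) == A.cols
print(R)                                      # rref of A
print("pivot columns:", pivot_cols)           # (0, 1)
print("linearly independent:", independent)   # False

Here the third column is not a pivot column, so \(x_3\) is a free variable and these columns are linearly dependent; indeed, \(\vec{v}_3 = -\vec{v}_1 + \vec{v}_2\text{.}\)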
Let \(S = \set{\vec{v}_1, \dots, \vec{v}_p}\text{.}\) Suppose some \(\vec{v}_i\) can be written as a linear combination of the other vectors, say \(\vec{v}_i = c_1 \vec{v}_1 + \dots + c_{i-1} \vec{v}_{i-1} + c_{i+1} \vec{v}_{i+1} + \dots + c_p \vec{v}_p\text{.}\) Then, subtracting \(\vec{v}_i\) from both sides,
\begin{equation*} \vec{0} = c_1 \vec{v}_1 + \dots + c_{i-1} \vec{v}_{i-1} + (-1) \vec{v}_i + c_{i+1} \vec{v}_{i+1} + \dots + c_p \vec{v}_p \end{equation*}
is a non-trivial linear dependence relation (since at least the coefficient of \(\vec{v}_i\) is non-zero). Thus, \(S\) is linearly dependent.
Conversely, let \(S\) be linearly dependent. Then, there exist scalars \(a_1, \dots, a_p \in \mathbb{R}\text{,}\) not all zero, such that,
\begin{equation*} a_1 \vec{v}_1 + \dots + a_p \vec{v}_p = \vec{0} \end{equation*}
Suppose that \(a_i \neq 0\text{.}\) Then, solving for \(\vec{v}_i\text{,}\)
\begin{equation*} \vec{v}_i = \frac{1}{a_i} \brac{-a_1 \vec{v}_1 - \dots - a_{i-1} \vec{v}_{i-1} - a_{i+1} \vec{v}_{i+1} - \dots - a_p \vec{v}_p} \end{equation*}
Thus, \(\vec{v}_i\) can be written as a linear combination of the other vectors.
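For example, the dependence relation \(\vec{v}_1 + \vec{v}_2 - \vec{v}_3 = \vec{0}\) with \(\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \vec{v}_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\) has a non-zero coefficient \(-1\) on \(\vec{v}_3\text{,}\) so solving for \(\vec{v}_3\) gives \(\vec{v}_3 = \vec{v}_1 + \vec{v}_2\text{.}\)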
Consider three vectors \(\vec{u}, \vec{v}, \vec{w}\) in \(\mathbb{R}^3\text{,}\) where \(\vec{u}\) and \(\vec{v}\) are linearly independent. Then the set \(\set{\vec{u}, \vec{v}, \vec{w}}\) is linearly dependent if and only if \(\vec{w}\) lies in the plane spanned by \(\vec{u}\) and \(\vec{v}\text{.}\)
Let \(A = \begin{bmatrix} \vec{v}_1 \amp \dots \amp \vec{v}_p \end{bmatrix}\text{.}\) Then, \(A\) is an \(n \times p\) matrix, so the homogeneous equation \(A\vec{x} = \vec{0}\) has \(n\) equations and \(p\) variables. If \(p > n\text{,}\) then there are more variables than equations, so there is at least one free variable. Thus, \(A\vec{x} = \vec{0}\) has a non-trivial solution, so the columns of \(A\) are linearly dependent.
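For example, any set of three vectors in \(\mathbb{R}^2\text{,}\) such as
\begin{equation*} \set{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}} \end{equation*}
is automatically linearly dependent, since here \(p = 3 > 2 = n\text{.}\)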
Note that this does not say that if \(p \leq n\text{,}\) then the set is linearly independent.
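For example, \(\set{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}}\) in \(\mathbb{R}^3\) has \(p = 2 \leq 3 = n\text{,}\) yet it is linearly dependent, since the second vector is twice the first.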