The page you are reading is part of a draft (v2.0) of the "No bullshit guide to math and physics."

The text has since gone through many edits and is now available in print and electronic format. The current edition of the book is v4.0, which is a substantial improvement over the draft version in terms of content and language (I hired a professional editor).

I'm leaving the old wiki content up for the time being, but I highly encourage you to check out the finished book. You can check out an extended preview here (PDF, 106 pages, 5MB).


Applications of Gauss-Jordan elimination

In this section we'll learn about a practical algorithm for the characterization of vector spaces. Actually, the algorithm is not new: you already know about the Gauss-Jordan elimination procedure that uses row operations to transform any matrix into its reduced row echelon form. In this section we'll see how this procedure can be used to find bases for all kinds of vector spaces.

Finding a basis

Suppose we have a vector space $V$ defined as the span of some set of vectors $\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m \}$: \[ V = \textrm{span}\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m \}. \] Your task will be to find a basis for $V$.

Recall that a basis is a minimal set of linearly independent vectors $\{ \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \}$ that allows us to write any $\vec{v} \in V$ as $\vec{v} = v_1\:\vec{e}_1 + v_2\:\vec{e}_2 + \cdots +v_n\:\vec{e}_n$. In other words, we are looking for an alternate description of the vector space $V$ as \[ V = \textrm{span}\{ \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \}, \] such that the vectors $\{ \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \}$ are linearly independent.

One way to accomplish this task is to write the vectors $\vec{v}_i$ as the rows of a matrix $M$. By this construction, the space $V$ corresponds to $\mathcal{R}(M)$, the row space of the matrix $M$. We can now use the standard row operations to bring the matrix into the reduced row echelon form. Applying row operations to a matrix does not change its row space. By transforming the matrix into its RREF, we will be able to see which of the rows are linearly independent and can serve as basis vectors $\vec{e}_j$:

\[ \left[\;\;\;\; \begin{array}{rcl} - & \vec{v}_1 & - \nl - & \vec{v}_2 & - \nl - & \vec{v}_3 & - \nl & \vdots & \nl - & \vec{v}_m & - \end{array} \;\;\;\;\right] \quad - \ \textrm{ G-J elim.} \to \quad \left[\;\;\;\;\begin{array}{rcl} - & \vec{e}_1 & - \nl & \vdots & \nl - & \vec{e}_n & - \nl 0 &\;\; 0 \;\; & 0 \nl 0 & \;\; 0 \;\; & 0 \end{array} \;\;\;\;\right]. \] The non-zero rows in the RREF of the matrix form a set of linearly independent vectors $\{ \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \}$ that span the vector space $V$. Any vectors that were not linearly independent have been reduced to rows of zeros.

The above process is called “finding a basis” and it is important to understand how to carry out the steps. Even more important is for you to understand why we are doing this. In the end, we still have the same space $V$, just described in terms of some new vectors. Why is the description of the vector space $V$ in terms of the vectors $\{ \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \}$ any better than the original description in terms of the vectors $\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m \}$? The description of $V$ in terms of a basis of $n$ linearly independent vectors shows the space $V$ is $n$-dimensional.
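
If you want to check such a computation by machine, the whole procedure is a few lines in a computer algebra system. Here is a minimal sketch using the Python library SymPy (not used in the text itself); the spanning set below is a made-up example in which the third vector is the sum of the first two:

```python
from sympy import Matrix

# Hypothetical spanning set: v3 = v1 + v2, so only two vectors are independent.
v1 = [1, 0, 2]
v2 = [0, 1, 1]
v3 = [1, 1, 3]

M = Matrix([v1, v2, v3])       # write the vectors as the rows of a matrix
rref_M, pivot_cols = M.rref()  # Gauss-Jordan elimination

# The non-zero rows of the RREF form a basis for V = span{v1, v2, v3}.
basis = [rref_M.row(i) for i in range(rref_M.rows) if any(rref_M.row(i))]
```

Since only two rows of the RREF are non-zero, the basis contains two vectors, confirming that this $V$ is two-dimensional.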

Another advantage of the basis description is that every vector $\vec{v} \in V$ has //unique// coordinates $(v_1,\ldots,v_n)$ with respect to the basis $\{ \vec{e}_1, \ldots, \vec{e}_n \}$. With respect to the original set $\{ \vec{v}_1, \ldots, \vec{v}_m \}$, the coordinates of a vector need not be unique: if the $\vec{v}_i$ are linearly dependent, the same vector can be written as a linear combination of them in more than one way.

Definitions

* $B_S = \{ \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \}$: a basis for an $n$-dimensional
  vector space $S$ is a set of $n$ linearly independent vectors in the space $S$.
  Any vector $\vec{v} \in S$ can be written as a linear combination of the basis elements:
  \[ \vec{v} = v_1 \vec{e}_1 + v_2 \vec{e}_2 + \cdots + v_n \vec{e}_n. \]
  A basis for an $n$-dimensional space contains exactly $n$ vectors.
* $\textrm{dim}(S)$: the dimension of the space $S$ is equal to the number of elements
  in a basis for $S$.

Recall the four fundamental spaces of a matrix $M \in \mathbb{R}^{m \times n}$ that we defined in the previous section:

* $\mathcal{R}(M)$: the row space of a matrix $M$, which consists of all possible
  linear combinations of the rows of the matrix $M$.
* $\mathcal{C}(M)$: the column space of a matrix $M$, which consists of all possible
  linear combinations of the columns of the matrix $M$.
* $\textrm{rank}(M)$: the rank of the matrix $M$, which is equal to the number
  of linearly independent rows and columns:
  $\textrm{rank}(M)=\textrm{dim}(\mathcal{R}(M))=\textrm{dim}(\mathcal{C}(M))$.
* $\mathcal{N}(M)$: the //null space// of a matrix $M$, 
  which is the set of vectors that the matrix $M$ sends to the zero vector:
  \[
    \mathcal{N}(M) \equiv \{ \vec{v} \in \mathbb{R}^n \;\;| \;\;M\vec{v} = \vec{0} \}.
  \]
* $\textrm{dim}(\mathcal{N}(M))$: the dimension of the null space, 
  also known as the //nullity// of $M$.
* $\mathcal{N}(M^T)$: the //left null space// of a matrix $M$, 
  which is the set of vectors that the matrix $M$ sends to the zero vector 
  when multiplied on the left:
  \[
    \mathcal{N}(M^T) \equiv \{ \vec{w} \in \mathbb{R}^m \;\;| \;\;\vec{w}^T M = \vec{0} \}.
  \]

Bases for fundamental spaces

The procedure we described at the beginning of this section can be used to “distill” any set of vectors into a set of linearly independent vectors that form a basis. Indeed, the Gauss–Jordan elimination procedure allows us to find a simple basis for the row space $\mathcal{R}(M)$ of any matrix.

How do we find bases for the other fundamental spaces of a matrix? We'll now show how to use the RREF of a matrix $A$ to find bases for $\mathcal{C}(A)$ and $\mathcal{N}(A)$. Pay careful attention to the locations of the pivots (leading ones) in the RREF of $A$ because they play a crucial role in what follows.

Basis for the row space

The row space $\mathcal{R}(A)$ of a matrix $A$ is defined as the space of all vectors that can be written as a linear combination of the rows of $A$. To find a basis for $\mathcal{R}(A)$, we use the Gauss–Jordan elimination procedure:

  1. Perform row operations to find the RREF of $A$.
  2. Read off the non-zero rows.
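
The key fact behind this recipe is that row operations preserve the row space. The sketch below (a SymPy illustration with a made-up matrix, not an example from the text) checks that a matrix and a row-op'ed copy of it have the same RREF, and hence yield the same basis:

```python
from sympy import Matrix

# A made-up 2x3 matrix.
A = Matrix([[1, 2, 3],
            [2, 4, 7]])

# Apply some row operations by hand: swap the two rows, then R2 <- R2 + 3*R1.
B = Matrix.vstack(A.row(1), A.row(0) + 3 * A.row(1))

# Row operations do not change the row space: both matrices have the same RREF,
# so reading off the non-zero rows gives the same basis either way.
assert A.rref()[0] == B.rref()[0]
basis_rows = [row for row in A.rref()[0].tolist() if any(row)]
```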

Basis for the column space

To find a basis for the column space $\mathcal{C}(A)$ of a matrix $A$ you need to find which of the columns of $A$ are linearly independent. To find the linearly independent columns of $A$, use the following steps:

  1. Perform row operations to find the RREF of $A$.
  2. Identify the columns which contain the pivots (leading ones).
  3. The corresponding columns in the original matrix $A$
     form a basis for the column space of $A$.

This procedure works because elementary row operations do not change the independence relations between the columns of the matrix. If two columns are independent in the reduced row echelon form, they were independent in the original matrix as well.

Note that the column space of the matrix $A$ corresponds to the row space of the transposed matrix $A^T$. Thus, another algorithm for finding a basis for the column space of a matrix $A$ is to use the row space algorithm on $A^T$.
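
Here is a minimal SymPy sketch of the pivot-columns recipe, using a made-up matrix whose third column is the sum of the first two. SymPy's built-in `columnspace()` applies the same rule, so we can compare against it:

```python
from sympy import Matrix

# Made-up matrix: the third column is the sum of the first two.
A = Matrix([[1, 0, 1],
            [2, 1, 3],
            [0, 1, 1]])

_, pivot_cols = A.rref()                # step 2: locate the leading ones
basis = [A.col(j) for j in pivot_cols]  # step 3: take columns of the ORIGINAL A

# SymPy's built-in columnspace() uses the same pivot-column rule.
assert basis == A.columnspace()
```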

Basis for the null space

The null space $\mathcal{N}(A)$ of a matrix $A \in \mathbb{R}^{m \times n}$ is \[ \mathcal{N}(A) = \{ \vec{x}\in \mathbb{R}^n \ | \ A\vec{x} = \vec{0} \: \}. \] In words, the null space is the set of solutions of the equation $A\vec{x}=\vec{0}$.

The vectors in the null space are orthogonal to the row space of the matrix $A$. We can easily find the null space by working with the RREF of the matrix. The steps involved are as follows:

  1. Perform row operations to find the RREF of $A$.
  2. Identify the columns that do not contain a leading one.
     These columns correspond to the free variables of the solution.

  For example, consider a matrix whose reduced row echelon form is 
  \[ 
  \textrm{rref}(A) = 
  \begin{bmatrix} 
     \mathbf{1} & 2 & 0 & 0 \nl  
     0 & 0 & \mathbf{1} & -3 \nl  
     0 & 0 & 0 & 0
  \end{bmatrix}.
  \]
  The second column and the fourth column do not contain leading ones (pivots),
  so they correspond to free variables, which are customarily called $s$, $t$, $r$, etc.
  We are looking for a vector with two free variables $(x_1,s,x_3,t)^T$.
  We rewrite the null space problem as a set of equations:
  \[
  \begin{bmatrix}
      1 & 2 & 0 & 0 \nl  
      0 & 0 & 1 & -3 \nl  
      0 & 0 & 0 & 0
   \end{bmatrix}
   \begin{bmatrix}
x_1 \nl s \nl x_3 \nl t
   \end{bmatrix}
   =
   \begin{bmatrix}
     0 \nl 0 \nl 0 
   \end{bmatrix}
\qquad 
\Rightarrow
\qquad 
   \begin{array}{rcl}
      1x_1 + 2s			&=&0 \nl
      1x_3 - 3t			&=&0 \nl
      0				&=&0
   \end{array}
  \]
  We can express the unknowns $x_1$ and $x_3$ in terms of the free variables $s$ and $t$
  as follows: $x_1 = -2s$ and $x_3=3t$. 
  We now have an expression for all vectors in the null space: $(-2s,s,3t,t)^T$, 
  for any $s,t \in \mathbb{R}$.
  We can rewrite the solution by splitting the $s$-part and the $t$-part:
  \[
   \begin{bmatrix}
x_1 \nl x_2 \nl x_3 \nl x_4
   \end{bmatrix}
   =
   \begin{bmatrix}
-2s \nl s \nl 3t \nl t
   \end{bmatrix}
   =
   \begin{bmatrix}
-2 \nl 1 \nl 0 \nl 0
   \end{bmatrix}\!s
   +
   \begin{bmatrix}
0 \nl 0 \nl 3 \nl 1
   \end{bmatrix}\!t.
  \]
  The direction vectors associated with each free variable form a basis for the null space of the matrix:
  \[
     \mathcal{N}(A) = 
       \left\{ \begin{bmatrix}-2s \nl s \nl  3t \nl t \end{bmatrix}, \forall s,t \in \mathbb{R} \right\}
     = \textrm{span}\left\{ \begin{bmatrix}-2\nl 1\nl0\nl0\end{bmatrix}, 
    \begin{bmatrix}0\nl0\nl 3\nl1\end{bmatrix} \right\}.
  \]
 

You can verify that the matrix $A$ times any vector in the null space produces the zero vector.
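
We can check this example with SymPy. Since row operations preserve the null space, we can work directly with the reduced matrix $\textrm{rref}(A)$ given above:

```python
from sympy import Matrix, zeros

# The reduced matrix rref(A) from the example above.
R = Matrix([[1, 2, 0, 0],
            [0, 0, 1, -3],
            [0, 0, 0, 0]])

basis = R.nullspace()  # one basis vector per free variable
for v in basis:
    assert R * v == zeros(3, 1)  # each basis vector is sent to the zero vector
```

SymPy returns one basis vector per free variable, with that free variable set to one, which matches the vectors $(-2,1,0,0)^T$ and $(0,0,3,1)^T$ found above.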

Examples

Example 1

Find a basis for the row space, the column space, and the null space of the matrix: \[ A = \left[\begin{array}{ccc}4 & -4 & 0\\1 & 1 & -2\\2 & -6 & 4\end{array}\right]. \] The first steps towards finding the row space, column space, and the null space of a matrix all require calculating the RREF of the matrix, so this is what we'll begin with.

* Let's focus on the first column. To create a pivot in the top left corner,
  we divide the first row by four, $R_1 \gets \frac{1}{4}R_1$:
  \[\left[\begin{array}{ccc}1 & -1 & 0\\1 & 1 & -2\\2 & -6 & 4\end{array}\right].\]
* We use this pivot to clear the numbers in the second and third rows below it
  by performing $R_2 \gets R_2 -R_1$ and  $R_3 \gets R_3 -2R_1$:
  \[\left[\begin{array}{ccc}1 & -1 & 0\\0 & 2 & -2\\0 & -4 & 4\end{array}\right].\]
* We can create a pivot in the second row if we divide it by two,
  $R_2 \gets \frac{1}{2}R_2$:
  \[\left[\begin{array}{ccc}1 & -1 & 0\\0 & 1 & -1\\0 & -4 & 4\end{array}\right].\]
* We now clear the column below it using $R_3 \gets R_3 +4R_2$:
  \[\left[\begin{array}{ccc}1 & -1 & 0\\0 & 1 & -1\\0 & 0 & 0\end{array}\right].\]
* The final simplification is to clear the $-1$ at the top of the second column
  using $R_1 \gets R_1 + R_2$:
  \[\left[\begin{array}{ccc}1 & 0 & -1\\0 & 1 & -1\\0 & 0 & 0\end{array}\right].\]

Now that we have the RREF of the matrix, we can answer the questions.

Before we get to finding the bases for the fundamental spaces of $A$, let us first do some basic dimension-counting. Observe that the matrix has just two pivots, so $\textrm{rank}(A)=2$. This means that both the row space and the column space are two-dimensional. Recall the equality: \[ n = \textrm{rank}( A ) \;\;+ \;\;\textrm{dim}( \mathcal{N}(A) ). \] The input space $\mathbb{R}^3$ splits into two types of vectors: those that are in the row space of $A$ and those that are in the null space. Since we know that the row space is two-dimensional, we can deduce that the null space will be $\textrm{dim}( \mathcal{N}(A) ) = n - \textrm{dim}( \mathcal{R}(A) ) = 3 - 2 = 1$ dimensional.

We now proceed to answer the questions posed in the problem:

* The row space of $A$ is spanned by the two non-zero rows in the RREF of $A$:
  \[ \mathcal{R}(A) = \textrm{span}\{ (1,0,-1), (0,1,-1) \}. \]

* To find the column space of $A$, observe that the first and the second
  columns contain the pivots in the RREF of $A$. Therefore,
  the first two columns of the original matrix $A$ form a basis for the column
  space of $A$:
  \[ \mathcal{C}(A) = \textrm{span}\left\{ \begin{bmatrix}4 \nl 1 \nl 2 \end{bmatrix},
     \begin{bmatrix}-4\nl 1\nl -6 \end{bmatrix} \right\}. \]
* Let's now find an expression for the null space of $A$.
  First observe that the third column does not contain a pivot.
  This means that the third column corresponds to a free variable,
  which can take on any value: $x_3= t, \;\;t \in \mathbb{R}$.
  We want to give a description of all vectors $(x_1,x_2,t)^T$ such that:
  \[\left[\begin{array}{ccc}1 & 0 & -1\nl 0 & 1 & -1\nl 0 & 0 & 0\end{array}\right]
    \left[\begin{array}{c}x_1\nl x_2\nl t \end{array}\right]=
    \left[\begin{array}{c}0\nl 0\nl 0 \end{array}\right]
\qquad 
\Rightarrow
\qquad 
   \begin{array}{rcl}
      1x_1  - 1t			&=&0 \nl
      1x_2  - 1t			&=&0 \nl
      0				&=&0 \;.
   \end{array}
  \]
  We find $x_1=t$ and $x_2=t$ and obtain the following final expression for the null space:
  \[ \mathcal{N}(A) = \left\{
       \begin{bmatrix} t \nl t \nl t \end{bmatrix}, \;\;t \in \mathbb{R}\right\}
  = 
  \textrm{span}\left\{ \begin{bmatrix}1\nl 1\nl 1\end{bmatrix} \right\}.
  \]
  The null space of $A$ is one-dimensional and consists of all multiples of the vector $(1,1,1)^T$.

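
The answers to this example can be double-checked with a few lines of SymPy:

```python
from sympy import Matrix

A = Matrix([[4, -4,  0],
            [1,  1, -2],
            [2, -6,  4]])

R, pivots = A.rref()
assert R == Matrix([[1, 0, -1], [0, 1, -1], [0, 0, 0]])
assert A.rank() == 2
assert tuple(pivots) == (0, 1)               # pivots in the first two columns
assert A.nullspace() == [Matrix([1, 1, 1])]  # N(A) = span{(1,1,1)^T}
```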
Example 2

Find a basis for the row space, column space and null space of the matrix: \[ B = \begin{bmatrix} 1 & 3 & 1 & 4 \nl 2 & 7 & 3 & 9 \nl 1 & 5 & 3 & 1 \nl 1 & 2 & 0 & 8 \end{bmatrix}. \]

First we find the reduced row echelon form of the matrix $B$: \[ \sim \begin{bmatrix} 1 & 3 & 1 & 4 \nl 0 & 1 & 1 & 1 \nl 0 & 2 & 2 & -3 \nl 0 & -1 & -1 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 1 \nl 0 & 1 & 1 & 1 \nl 0 & 0 & 0 & -5 \nl 0 & 0 & 0 & 5 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 0 \nl 0 & 1 & 1 & 0 \nl 0 & 0 & 0 & 1 \nl 0 & 0 & 0 & 0 \end{bmatrix}. \]

As in the previous example, we begin by calculating the dimensions of the subspaces. The rank of this matrix is $3$ so the column space and the row space will be $3$-dimensional. Since the input space is $\mathbb{R}^4$, this leaves one dimension for the null space. Let us proceed now to find the fundamental subspaces for the matrix $B$.

* The row space of $B$ is spanned by the three non-zero rows in the RREF of $B$:
  \[ \mathcal{R}(B) = \textrm{span}\{ (1,0,-2,0), (0,1,1,0), (0,0,0,1) \}. \]

* The column space of $B$ is spanned by the first, second, and fourth columns
  of $B$, since these columns contain the leading ones in the RREF of $B$:

  \[ \mathcal{C}(B) = \textrm{span}\left\{ 
    \begin{bmatrix} 1 \nl  2 \nl  1 \nl  1\end{bmatrix},\;\;
    \begin{bmatrix} 3 \nl  7 \nl  5 \nl  2\end{bmatrix},\;\;
    \begin{bmatrix} 4 \nl  9 \nl  1 \nl  8\end{bmatrix}
  \right\}. \]
* The third column lacks a leading one, so it corresponds to a free variable $x_3= t,\;t \in \mathbb{R}$.
  The null space of $B$ is the set of vectors $(x_1,x_2,t,x_4)^T$ such that:    
  \[\begin{bmatrix} 1 & 0 & -2 & 0 \nl  0 & 1 & 1 & 0 \nl  0 & 0 & 0 & 1 \nl  0 & 0 & 0 & 0 \end{bmatrix}
    \left[\begin{array}{c}x_1\\x_2\\x_3\\x_4 \end{array}\right]=
    \left[\begin{array}{c}0\\0\\0\\0 \end{array}\right]
\qquad 
\Rightarrow
\qquad 
   \begin{array}{rcl}
      1x_1  - 2t			&=&0 \nl
      1x_2  + 1t			&=&0 \nl
       x_4 				&=&0 \nl
      0				&=&0\;.
   \end{array}
  \]
  We find the values of $x_1$, $x_2$, and $x_4$ in terms of $t$ and obtain
  \[ \mathcal{N}(B) = \left\{ \begin{bmatrix} 2t \nl -t \nl t \nl 0 \end{bmatrix},
     \;\;t \in \mathbb{R}\right\}
  = 
  \textrm{span}\left\{ \begin{bmatrix}2\\-1\\1\\0 \end{bmatrix} \right\}.
  \]
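
Again, the answers can be verified with SymPy:

```python
from sympy import Matrix

B = Matrix([[1, 3, 1, 4],
            [2, 7, 3, 9],
            [1, 5, 3, 1],
            [1, 2, 0, 8]])

R, pivots = B.rref()
assert R == Matrix([[1, 0, -2, 0],
                    [0, 1,  1, 0],
                    [0, 0,  0, 1],
                    [0, 0,  0, 0]])
assert tuple(pivots) == (0, 1, 3)                # 1st, 2nd, and 4th columns
assert B.nullspace() == [Matrix([2, -1, 1, 0])]  # N(B) = span{(2,-1,1,0)^T}
```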

Discussion

Dimensions

Note that for an $m \times n$ matrix $M \in \mathbb{R}^{m \times n}$, the row space and the null space consist of vectors with $n$ components, while the column space and the left null space consist of vectors with $m$ components.

You shouldn't confuse the number of components of the vectors in a space (or the number of rows in a matrix) with the dimension of its row space. Suppose we are given a matrix with five rows and ten columns, $M \in \mathbb{R}^{5 \times 10}$, and that the RREF of $M$ contains three non-zero rows. The row space of $M$ is therefore $3$-dimensional, and a basis for it will consist of three vectors, each vector having ten components. The column space of the matrix will also be $3$-dimensional, but a basis for it will consist of vectors with five components. The null space of the matrix will be $10-3=7$-dimensional and will also consist of vectors with ten components. Finally, the left null space will be $5-3=2$-dimensional and spanned by vectors with five components.
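
These dimension counts can be confirmed in SymPy. Below we build a made-up rank-3 matrix $M \in \mathbb{R}^{5 \times 10}$ (this specific construction is an assumption for illustration) as a product of a $5 \times 3$ matrix with independent columns and a $3 \times 10$ matrix with independent rows:

```python
from sympy import Matrix

# Build a made-up rank-3 matrix M in R^{5x10} as a product of a 5x3 matrix
# with independent columns and a 3x10 matrix with independent rows.
P = Matrix([[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1],
            [1, 1, 0],
            [0, 1, 1]])
Q = Matrix([[1, 0, 0, 1, 2, 0, 1, 0, 3, 1],
            [0, 1, 0, 0, 1, 1, 0, 2, 0, 1],
            [0, 0, 1, 1, 0, 1, 2, 0, 1, 0]])
M = P * Q

assert M.rank() == 3
assert len(M.nullspace()) == 10 - 3   # nullity 7; each vector has 10 components
assert M.nullspace()[0].rows == 10
assert len(M.T.nullspace()) == 5 - 3  # left null space is 2-dimensional
assert M.T.nullspace()[0].rows == 5
```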

Importance of bases

The procedures for identifying bases are somewhat technical and boring, but it is very important that you know how to find bases for vector spaces. To illustrate the importance of a basis consider a scenario in which you are given a description of the $xy$-plane $P_{xy}$ as the span of three vectors: \[ P_{xy}= \textrm{span}\{ (1,0,0), (0,1,0), (1,1,0) \}. \] The above definition of $P_{xy}$ says that any point $p \in P_{xy}$ can be written as a linear combination: \[ p = a (1,0,0) + b(0,1,0) + c(1,1,0) \] for some coefficients $(a,b,c)$. This representation of $P_{xy}$ is misleading. It might make us think (erroneously) that $P_{xy}$ is three-dimensional, since we need three coefficients $(a,b,c)$ to describe arbitrary vectors in $P_{xy}$.

Do we really need three coefficients to describe any $p \in P_{xy}$? No, we don't. Two vectors are sufficient: $(1,0,0)$ and $(0,1,0)$ for example. The same point $p$ described above can be written in the form \[ p = \underbrace{(a+c)}_\alpha (1,0,0) + \underbrace{(b+c)}_\beta (0,1,0) = \alpha (1,0,0) + \beta (0,1,0), \] in terms of two coefficients $(\alpha, \beta)$. So the vector $(1,1,0)$ was not really necessary for the description of $P_{xy}$. It was redundant, because it can be expressed in terms of the other vectors. By getting rid of it, we obtain a description of $P_{xy}$ in terms of a basis: \[ P_{xy}= \textrm{span}\{ (1,0,0), (0,1,0) \}. \] Recall that the requirement for a basis $B$ for a space $V$ is that it be made of linearly independent vectors and that it span the space $V$. The vectors $\{ (1,0,0), (0,1,0) \}$ are sufficient to represent any vector in $P_{xy}$ and these vectors are linearly independent. We can conclude (this time correctly) that the space $P_{xy}$ is two-dimensional. If someone asks you “how do you know that $P_{xy}$ is two-dimensional?,” say “Because a basis for it contains two vectors.”
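
The redundancy of the vector $(1,1,0)$ is exactly what Gauss–Jordan elimination detects. A quick SymPy check:

```python
from sympy import Matrix

# Stack the three given vectors as rows of a matrix.
M = Matrix([[1, 0, 0],
            [0, 1, 0],
            [1, 1, 0]])   # (1,1,0) = (1,0,0) + (0,1,0): redundant

R, _ = M.rref()
basis = [row for row in R.tolist() if any(row)]
assert basis == [[1, 0, 0], [0, 1, 0]]  # only two independent vectors remain
assert M.rank() == 2                    # so P_xy is two-dimensional
```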

Exercises

Exercise 1

Consider the following matrix: \[ A= \begin{bmatrix} 1 & 3 & 3 & 3 \nl 2 & 6 & 7 & 6 \nl 3 & 9 & 9 & 10 \end{bmatrix} \] Find the RREF of $A$ and use it to find bases for $\mathcal{R}(A)$, $\mathcal{C}(A)$, and $\mathcal{N}(A)$.

Ans: $\mathcal{R}(A) = \textrm{span}\{ (1,3,0,0), (0,0,1,0), (0,0,0,1) \}$, $\mathcal{C}(A) = \textrm{span}\{ (1,2,3)^T, (3,7,9)^T, (3,6,10)^T \}$, and $\mathcal{N}(A)=\textrm{span}\{ (-3,1,0,0)^T \}$.
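
A SymPy snippet to check your work on this exercise:

```python
from sympy import Matrix

A = Matrix([[1, 3, 3, 3],
            [2, 6, 7, 6],
            [3, 9, 9, 10]])

R, pivots = A.rref()
assert R == Matrix([[1, 3, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
assert tuple(pivots) == (0, 2, 3)                 # pivot columns: 1st, 3rd, 4th
assert A.nullspace() == [Matrix([-3, 1, 0, 0])]   # N(A) = span{(-3,1,0,0)^T}
```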

 