The page you are reading is part of a draft (v2.0) of the "No bullshit guide to math and physics."

The text has since gone through many edits and is now available in print and electronic format. The current edition of the book is v4.0, which is a substantial improvement over the draft version in terms of both content and language (I hired a professional editor).

I'm leaving the old wiki content up for the time being, but I highly encourage you to check out the finished book. An extended preview is available here (PDF, 106 pages, 5MB).


Gram-Schmidt orthogonalization

Suppose you are given a set of $n$ linearly independent vectors $\{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$ taken from an $n$-dimensional space $V$ and you are asked to transform them into an orthonormal basis $\{\hat{\mathbf{e}}_1,\hat{\mathbf{e}}_2,\ldots,\hat{\mathbf{e}}_n \}$ for which: \[ \langle \hat{\mathbf{e}}_i, \hat{\mathbf{e}}_j \rangle =\left\{ \begin{array}{ll} 1 & \textrm{ if } i = j, \nl 0 & \textrm{ if } i \neq j. \end{array}\right. \] This procedure is known as orthogonalization. In this section, we'll learn an intuitive algorithm for converting any set of linearly independent vectors into a set of orthonormal vectors. The algorithm is called Gram-Schmidt orthogonalization and it uses repeated projection and subtraction operations.

Definitions

  • $V$: An $n$-dimensional vector space.
  • $\{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$: A generic basis for the space $V$.
  • $\{ \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n \}$: An orthogonal basis for $V$, that is, a basis which satisfies $\mathbf{e}_i \cdot \mathbf{e}_j=0$ if $i\neq j$.
  • $\{ \hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \ldots, \hat{\mathbf{e}}_n \}$: An orthonormal basis for $V$, that is, an orthogonal basis of unit-length vectors.

We assume that the vector space $V$ is equipped with an inner product operation: \[ \langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}. \]

The following operations are defined in terms of the inner product:

  • The length of a vector: $\|\mathbf{v}\| = \sqrt{ \langle \mathbf{v}, \mathbf{v} \rangle }$.
  • The projection operation. The projection of the vector $\mathbf{u}$ onto the subspace spanned by $\mathbf{v}$ is given by: \[ \Pi_{\mathbf{v}}(\mathbf{u}) = \frac{ \langle \mathbf{u}, \mathbf{v} \rangle }{ \|\mathbf{v}\|^2 } \mathbf{v}. \]
  • The projection complement of the projection $\Pi_{\mathbf{v}}(\mathbf{u})$ is the vector $\mathbf{w}$ that we need to add to $\Pi_{\mathbf{v}}(\mathbf{u})$ to get back the complete original vector $\mathbf{u}$: \[ \Pi_{\mathbf{v}}(\mathbf{u}) + \mathbf{w} = \mathbf{u} \qquad \textrm{or} \qquad \mathbf{w} = \mathbf{u} - \Pi_{\mathbf{v}}(\mathbf{u}). \] Observe that the vector $\mathbf{w}$ is, by construction, orthogonal to the vector $\mathbf{v}$: $\langle \mathbf{u} - \Pi_{\mathbf{v}}(\mathbf{u}), \mathbf{v} \rangle = 0$. Both operations are illustrated in the short computational sketch that follows this list.
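
To make the projection and the projection complement concrete, here is a minimal sketch in Python using numpy. It assumes the standard dot product plays the role of the inner product; the vectors u and v and the helper name proj are my own choices for illustration.

  import numpy as np

  def proj(u, v):
      # Projection of u onto the line spanned by v: <u,v>/||v||^2 * v
      return (np.dot(u, v) / np.dot(v, v)) * v

  u = np.array([3.0, 4.0, 0.0])   # arbitrary example vectors
  v = np.array([1.0, 1.0, 0.0])

  p = proj(u, v)                  # the projection Pi_v(u) = [3.5, 3.5, 0]
  w = u - p                       # the projection complement
  print(np.dot(w, v))             # 0.0 (up to rounding): w is orthogonal to v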

The discussion in this section is in terms of abstract vectors denoted in bold $\mathbf{u}$ and the operations are performed in an abstract inner product space. Thus, the algorithm described below can be used with vectors $\vec{v} \in \mathbb{R}^n$, matrices $M \in \mathbb{R}^{m\times n}$, and polynomials $\mathbf{p} \in P_n(x)$. Indeed, we can talk about orthogonality for any vector space for which we can define an inner product operation.

Orthonormal bases are nice

Recall that a basis for an $n$-dimensional vector space $V$ is any set of $n$ linearly independent vectors in $V$. The choice of basis is a big deal because it is with respect to that basis that we write down the coordinates of vectors and matrices. From the theoretical point of view, all bases are equally good, but from a practical point of view orthogonal and orthonormal bases are much easier to work with.

An orthonormal basis is the most useful kind of basis because the coefficients $(c_1,c_2,c_3)$ of a vector $\mathbf{c}$ can be obtained simply using the inner product: \[ c_1 = \langle \mathbf{c}, \hat{\mathbf{e}}_1 \rangle, \quad c_2 = \langle \mathbf{c}, \hat{\mathbf{e}}_2 \rangle, \quad c_3 = \langle \mathbf{c}, \hat{\mathbf{e}}_3 \rangle. \]

Indeed we can write down any vector $\mathbf{v}$ as \[ \mathbf{v} = \langle \mathbf{v}, \hat{\mathbf{e}}_1 \rangle \hat{\mathbf{e}}_1 + \langle \mathbf{v}, \hat{\mathbf{e}}_2 \rangle \hat{\mathbf{e}}_2 + \langle \mathbf{v}, \hat{\mathbf{e}}_3 \rangle \hat{\mathbf{e}}_3. \] This formula is a generalization of the usual formula for coefficients with respect to the standard basis $\{ \hat{\imath}, \hat{\jmath},\hat{k} \}$: \[ \vec{v} = (\vec{v}\cdot\hat{\imath})\hat{\imath} + (\vec{v}\cdot\hat{\jmath})\hat{\jmath} + (\vec{v}\cdot\hat{k}) \hat{k}. \]
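
As a quick check of this formula, the following sketch (my own example, using numpy and the standard dot product) computes the coefficients of a vector with respect to an orthonormal basis of $\mathbb{R}^2$ that differs from the standard basis, and then reconstructs the vector from those coefficients.

  import numpy as np

  # An orthonormal basis for R^2 (rotated 45 degrees from the standard basis)
  e1 = np.array([1.0, 1.0]) / np.sqrt(2)
  e2 = np.array([1.0, -1.0]) / np.sqrt(2)

  v = np.array([5.0, 3.0])        # an arbitrary vector

  c1 = np.dot(v, e1)              # coefficient of v along e1
  c2 = np.dot(v, e2)              # coefficient of v along e2

  print(c1 * e1 + c2 * e2)        # [5. 3.]  -- the original vector v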

Orthogonalization

As we said earlier, the “best” kind of basis for computational purposes is an orthonormal one like $\{ \hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \ldots, \hat{\mathbf{e}}_n \}$. A common task in linear algebra is to upgrade some general set of $n$ linearly independent vectors $\{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$ into an orthonormal basis $\{ \hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \ldots, \hat{\mathbf{e}}_n \}$, where each vector $\hat{\mathbf{e}}_i$ is formed as a linear combination of the vectors $\{\mathbf{v}_i\}$. Note that both sets of vectors span the same space: \[ V \equiv \textrm{span}\{\mathbf{v}_1,\mathbf{v}_2,\ldots,\mathbf{v}_n \} = \textrm{span}\{\hat{\mathbf{e}}_1,\hat{\mathbf{e}}_2,\ldots,\hat{\mathbf{e}}_n \}, \] but the basis $\{\hat{\mathbf{e}}_1,\hat{\mathbf{e}}_2,\ldots,\hat{\mathbf{e}}_n \}$ is easier to work with, since we can compute the coefficients of any vector using the inner product: $u_i = \langle \mathbf{u}, \hat{\mathbf{e}}_i \rangle$.

The technical term for distilling a high-quality basis from a low-quality basis is orthogonalization. Note that it is not called orthonormalization, which would be 1) way too long a word (in German it would be okay, I guess) and 2) over-complicated for nothing. You see, the actual work is in obtaining a set of vectors $\{ \mathbf{e}_i\}$ that are orthogonal to each other: \[ \mathbf{e}_i \cdot \mathbf{e}_j=0, \quad \textrm{ for all } i \neq j. \] Converting these into an orthonormal basis is then done simply by dividing each vector by its length: $\hat{\mathbf{e}}_i = \frac{\mathbf{e}_i}{ \| \mathbf{e}_i \| }$.

Let's now see how this is done.

Gram-Schmidt orthogonalization

The Gram-Schmidt orthogonalization procedure converts a set of arbitrary linearly independent vectors $\{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$ into an orthonormal set of vectors $\{ \hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \ldots, \hat{\mathbf{e}}_n \}$. The main idea is to take the directions of the vectors $\{ \mathbf{v}_i \}$ one at a time and each time define a new vector $\mathbf{e}_i$ as the part of $\mathbf{v}_i$ that is orthogonal to all the previously chosen vectors $\mathbf{e}_1$, $\mathbf{e}_2$, $\ldots$, $\mathbf{e}_{i-1}$. The orthogonalization algorithm consists of $n$ steps: \[ \begin{align*} \mathbf{e}_1 &= \mathbf{v}_1, & \hat{\mathbf{e}}_1 &= {\mathbf{v}_1 \over \|\mathbf{v}_1\|}, \nl \mathbf{e}_2 &= \mathbf{v}_2-\Pi_{\hat{\mathbf{e}}_1}\!(\mathbf{v}_2), & \hat{\mathbf{e}}_2 &= {\mathbf{e}_2 \over \|\mathbf{e}_2\|}, \nl \mathbf{e}_3 &= \mathbf{v}_3-\Pi_{\hat{\mathbf{e}}_1}\!(\mathbf{v}_3)-\Pi_{\hat{\mathbf{e}}_2}\!(\mathbf{v}_3), & \hat{\mathbf{e}}_3 &= {\mathbf{e}_3 \over \|\mathbf{e}_3\|}, \nl \mathbf{e}_4 &= \mathbf{v}_4-\Pi_{\hat{\mathbf{e}}_1}\!(\mathbf{v}_4)-\Pi_{\hat{\mathbf{e}}_2}\!(\mathbf{v}_4)-\Pi_{\hat{\mathbf{e}}_3}\!(\mathbf{v}_4), & \hat{\mathbf{e}}_4 &= {\mathbf{e}_4 \over \|\mathbf{e}_4\|}, \nl & \vdots &&\vdots \nl \mathbf{e}_n &= \mathbf{v}_n-\sum_{i=1}^{n-1}\Pi_{\hat{\mathbf{e}}_i}\!(\mathbf{v}_n), &\hat{\mathbf{e}}_n &= {\mathbf{e}_n\over\|\mathbf{e}_n\|}. \end{align*} \] In the $j$th step of the procedure, we compute a vector $\mathbf{e}_j$ by starting from $\mathbf{v}_j$ and subtracting its projections onto the previously computed vectors $\hat{\mathbf{e}}_i$, for all $i<j$. In other words, $\mathbf{e}_j$ is the part of $\mathbf{v}_j$ that is orthogonal to all the vectors $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_{j-1}$.
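
The procedure translates almost line by line into code. The sketch below is one possible implementation in Python using numpy; the function name gram_schmidt and the example vectors are my own choices, and the standard dot product plays the role of the inner product.

  import numpy as np

  def gram_schmidt(vectors):
      # Orthonormalize a list of linearly independent vectors.
      # At step j we subtract from v_j its projections onto the previously
      # computed unit vectors e_1, ..., e_{j-1}, then normalize the result.
      basis = []
      for v in vectors:
          e = np.array(v, dtype=float)
          for q in basis:
              e = e - np.dot(v, q) * q     # subtract Pi_q(v); q has unit length
          basis.append(e / np.linalg.norm(e))
      return basis

  vs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]
  es = gram_schmidt(vs)
  # The matrix of inner products is the identity: <e_i, e_j> = 1 if i = j, else 0
  print(np.round([[np.dot(a, b) for b in es] for a in es], 10))

Note that because each $\hat{\mathbf{e}}_i$ has unit length, the projection $\Pi_{\hat{\mathbf{e}}_i}(\mathbf{v})$ simplifies to $\langle \mathbf{v}, \hat{\mathbf{e}}_i \rangle \hat{\mathbf{e}}_i$, which is what the inner loop computes.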

The above procedure is known as orthogonalization because it splits the vector space $V$ into mutually orthogonal subspaces $V_1, V_2, \ldots, V_n$: \[ V_j = \textrm{span}\{ \mathbf{v}_1, \ldots, \mathbf{v}_j \} \setminus \textrm{span}\{ \mathbf{v}_1, \ldots, \mathbf{v}_{j-1} \}, \] that is, $V_j$ captures the part of $\textrm{span}\{ \mathbf{v}_1, \ldots, \mathbf{v}_j \}$ that is not already contained in $\textrm{span}\{ \mathbf{v}_1, \ldots, \mathbf{v}_{j-1} \}$, which is precisely the line spanned by $\mathbf{e}_j$. Recall that the symbol $\setminus$ denotes the set minus operation. The set $A \setminus B$ consists of all elements that are in $A$ but not in $B$.

Observe that the subspaces $V_1, V_2, \ldots, V_n$ are, by construction, mutually orthogonal: for any vector $\mathbf{u} \in V_i$ and any vector $\mathbf{v} \in V_j$ with $j\neq i$, we have $\langle \mathbf{u}, \mathbf{v} \rangle = 0$.

The vector space $V$ is the sum of these subspaces: \[ V = V_1 \oplus V_2 \oplus V_3 \oplus \cdots \oplus V_n. \] The notation $\oplus$ denotes the orthogonal sum of subspaces.
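
To see this decomposition in action, the following sketch (my own illustration, continuing the gram_schmidt example above) splits an arbitrary vector into its components in each subspace $V_j$ and checks that the components are mutually orthogonal and add back up to the original vector.

  import numpy as np

  # An orthonormal basis of R^3, e.g. the output of the gram_schmidt sketch above
  es = [np.array([1.0, 1.0, 0.0]) / np.sqrt(2),
        np.array([1.0, -1.0, 2.0]) / np.sqrt(6),
        np.array([-1.0, 1.0, 1.0]) / np.sqrt(3)]

  u = np.array([2.0, -1.0, 3.0])                   # an arbitrary vector in V
  parts = [np.dot(u, e) * e for e in es]           # component of u in each V_j

  print(np.allclose(sum(parts), u))                # True: the parts add up to u
  print(np.round(np.dot(parts[0], parts[1]), 10))  # 0.0: parts in different V_j are orthogonal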

Discussion

The main point about orthogonalization that I want you to know is that it can be done. Any “low quality” basis (a set of $n$ linearly independent vectors $\{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$ in an $n$-dimensional space) can be converted into a “high quality” orthonormal basis $\{ \hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \ldots, \hat{\mathbf{e}}_n \}$ by using the Gram-Schmidt procedure.

In the next section we will learn how to think about this orthogonalization procedure in terms of matrices; performing Gram-Schmidt on the columns of a matrix corresponds to computing its $QR$ decomposition.
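
As a small preview (again my own illustration, not from the book), numpy's built-in QR routine produces the same orthonormal vectors as the Gram-Schmidt sketch above, stored as the columns of the matrix $Q$, possibly with some columns flipped in sign.

  import numpy as np

  # Columns of A are the input vectors v_1, v_2, v_3
  A = np.column_stack([[1.0, 1.0, 0.0],
                       [1.0, 0.0, 1.0],
                       [0.0, 1.0, 1.0]])

  Q, R = np.linalg.qr(A)
  print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the columns of Q are orthonormal
  print(np.allclose(Q @ R, A))             # True: A = QR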

 