The page you are reading is part of a draft (v2.0) of the "No bullshit guide to math and physics."

The text has since gone through many edits and is now available in print and electronic format. The current edition of the book is v4.0, which is a substantial improvement over this draft in terms of content and language (I hired a professional editor).

I'm leaving the old wiki content up for the time being, but I highly encourage you to check out the finished book. You can read an extended preview here (PDF, 106 pages, 5MB).


Projections

In this section we will learn about the projections of vectors onto lines and planes. Given an arbitrary vector, your task will be to find how much of this vector lies in a given direction (projection onto a line) or how much of it lies within some plane (projection onto a plane). We will use the dot product a lot in this section.

For each of the formulas in this section, you must draw a picture. The picture will make projections and distances a lot easier to think about. In a certain sense, the pictures are much more important than the formulas, so be sure you understand them well. Don't worry about memorizing any of the formulas in this section: the formulas are nothing more than captions to go along with the pictures.

Concepts

  • $S\subseteq \mathbb{R}^n$: a subspace of $\mathbb{R}^n$. For the purposes of this chapter, we will use $S \subset \mathbb{R}^3$, and $S$ will be either a line $\ell$ or a plane $P$ that **passes through the origin**.
  • $S^\perp$: the orthogonal space to $S$. We have $S^\perp = \{ \vec{w} \in \mathbb{R}^n \ | \ \vec{w} \cdot S = 0\}$.
  • $\Pi_S$: the //projection// onto the space $S$.
  • $\Pi_{S^\perp}$: the //projection// onto the orthogonal space $S^\perp$.

Projections

Let $S$ be a vector subspace of $\mathbb{R}^3$. We will define precisely what vector spaces are later on. For this section, our focus is on $\mathbb{R}^3$, whose subspaces of interest are lines and planes that pass through the origin.

The projection onto the space $S$ is a linear function of the form: \[ \Pi_S : \mathbb{R}^n \to \mathbb{R}^n, \] which cuts off all parts of the input that do not lie within $S$. More precisely, we can describe $\Pi_S$ by its action on different inputs:

  • If $\vec{v} \in S$, then $\Pi_S(\vec{v}) = \vec{v}$.
  • If $\vec{w} \in S^\perp$, then $\Pi_S(\vec{w}) = \vec{0}$.
  • Linearity and the above two conditions imply that, for any vector $\vec{u}=\alpha\vec{v}+ \beta \vec{w}$ with $\vec{v} \in S$ and $\vec{w} \in S^\perp$, we have:
  \[
   \Pi_S(\vec{u}) = \Pi_S(\alpha\vec{v}+ \beta \vec{w}) = \alpha\vec{v}.
  \]

In the above we used the notion of an orthogonal space: \[ S^\perp = \{ \vec{w} \in \mathbb{R}^n \ | \ \vec{w} \cdot S = 0\}, \] where $\vec{w}\cdot S = 0$ means that $\vec{w}$ is orthogonal to every vector $\vec{s} \in S$, that is, $\vec{w}\cdot\vec{s}=0$ for all $\vec{s} \in S$.

Projections project onto the space $S$ in the sense that, no matter which vector $\vec{u}$ you start from, applying the projection $\Pi_S$ will result in a vector that is part of $S$: \[ \Pi_S(\vec{u}) \in S. \] All parts of $\vec{u}$ that were in the perp space $S^\perp$ will get killed. Meet $\Pi_S$, the $S$-perp killer.

Being entirely inside $S$ or perpendicular to $S$ can be used to split the set of vectors $\mathbb{R}^3$. We say that $\mathbb{R}^3$ decomposes into the direct sum of the subspaces $S$ and $S^\perp$: \[ \mathbb{R}^3 = S \oplus S^\perp, \] which means that any vector $\vec{u}\in \mathbb{R}^3$ can be split into an $S$-part $\vec{v}=\Pi_S(\vec{u})$ and a non-$S$ part $\vec{w}=\Pi_{S^\perp}(\vec{u})$ such that: \[ \vec{u}=\vec{v} + \vec{w}. \]

Okay, that is enough theory for now; we will turn to the specific formulas for lines and planes shortly. But first, let me state one last fact. A defining property of projection operations is that they are idempotent, which means it doesn't matter whether you project a vector once, twice, or a million times: the result will always be the same. \[ \Pi_S( \vec{u} ) = \Pi_S( \Pi_S( \vec{u} )) = \Pi_S(\Pi_S(\Pi_S(\vec{u} ))) = \ldots. \] Once you project to the subspace $S$, any further projections onto $S$ don't do anything.
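
Here is a minimal numeric check of idempotence, written as a Python sketch using numpy. The function name `proj_xy` is my own choice; the example uses the projection onto the $xy$ plane, which we will meet again below.

<code python>
import numpy as np

def proj_xy(u):
    """Projection onto the xy plane: keep the x and y components, kill z."""
    return np.array([u[0], u[1], 0.0])

u = np.array([1.0, 2.0, 3.0])
once = proj_xy(u)
twice = proj_xy(proj_xy(u))
print(np.allclose(once, twice))  # True: a second projection changes nothing
</code>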

We will first derive formulas for projection onto lines and planes that pass through the origin.

Projection onto a line

Consider the one-dimensional subspace given by the line $\ell$ with direction vector $\vec{v}$ that passes through the origin $\vec{0}$: \[ \ell: \ \{ (x,y,z) \in \mathbb{R}^3 \ | \ (x,y,z)=\vec{0}+ t\:\vec{v}, t \in \mathbb{R} \}. \]

The projection onto $\ell$ for an arbitrary vector $\vec{u} \in \mathbb{R}^3$ is given by: \[ \Pi_\ell( \vec{u} ) = \frac{ \vec{v} \cdot \vec{u} }{ \| \vec{v} \|^2 } \vec{v}. \]
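
The formula translates directly into code. Below is a minimal Python sketch using numpy; the function name `project_onto_line` is mine, not a library function.

<code python>
import numpy as np

def project_onto_line(u, v):
    """Project u onto the line through the origin with direction vector v."""
    return (np.dot(v, u) / np.dot(v, v)) * v   # (v . u / ||v||^2) v

u = np.array([4.0, 5.0, 6.0])
v = np.array([1.0, 0.0, 0.0])      # direction of the x axis
print(project_onto_line(u, v))     # [4. 0. 0.]
</code>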

The orthogonal space to the line $\ell$ consists of all vectors that are perpendicular to the direction vector $\vec{v}$. Or, mathematically speaking: \[ \ell^\perp: \ \ \{ (x,y,z) \in \mathbb{R}^3 \ | \ (x,y,z)\cdot \vec{v} = 0 \}. \] You should recognize that the above equation is the definition of a plane. So the orthogonal space of a line $\ell$ with direction vector $\vec{v}$ is a plane with normal vector $\vec{v}$. Makes sense, no?

From what we have above, we can get the projection onto $\ell^\perp$ very easily. Recall that any vector can be written as the sum of an $\ell$ part and an $\ell^\perp$ part: $\vec{u}=\vec{v} + \vec{w}$ where $\vec{v}=\Pi_\ell(\vec{u}) \in \ell$ and $\vec{w}=\Pi_{\ell^\perp}(\vec{u}) \in \ell^\perp$. This means that to obtain $\Pi_{\ell^\perp}(\vec{u})$ we can subtract the $\Pi_\ell$ part from the original vector $\vec{u}$: \[ \Pi_{\ell^\perp}(\vec{u}) = \vec{w} = \vec{u}-\vec{v} = \vec{u} - \Pi_{\ell}(\vec{u}) = \vec{u} - \frac{ \vec{v} \cdot \vec{u} }{ \| \vec{v} \|^2 } \vec{v}. \] Indeed, we can think of $\Pi_{\ell^\perp}(\vec{u}) = \vec{w}$ as what remains of $\vec{u}$ after we have removed all the $\ell$ part from it.
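
Continuing the Python sketch from above (again, the function name is my own), we can compute the perpendicular part and verify that it is indeed orthogonal to the direction vector:

<code python>
import numpy as np

def project_onto_line_perp(u, v):
    """What remains of u after removing its component along v."""
    return u - (np.dot(v, u) / np.dot(v, v)) * v

u = np.array([4.0, 5.0, 6.0])
v = np.array([1.0, 1.0, 0.0])
w = project_onto_line_perp(u, v)
print(np.dot(w, v))   # 0.0: w is perpendicular to v
</code>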

Projection onto a plane

Let $S$ now be the two-dimensional plane $P$ with normal vector $\vec{n}$ that passes through the origin: \[ P: \ \ \{ (x,y,z) \in \mathbb{R}^3 \ | \ \vec{n} \cdot (x,y,z) = 0 \}. \]

The perpendicular space $S^\perp$ is given by a line with direction vector $\vec{n}$: \[ P^\perp: \ \{ (x,y,z) \in \mathbb{R}^3 \ | \ (x,y,z)=t\:\vec{n}, t \in \mathbb{R} \}, \] and we have again $\mathbb{R}^3 = S \oplus S^\perp$.

We are interested in finding $\Pi_P$, but it will actually be easier to find $\Pi_{P^\perp}$ first and then compute $\Pi_P(\vec{u}) = \vec{v} = \vec{u} - \vec{w}$, where $\vec{w}=\Pi_{P^\perp}(\vec{u})$.

Since $P^\perp$ is a line, we know how to project onto it: \[ \Pi_{P^\perp}( \vec{u} ) = \frac{ \vec{n} \cdot \vec{u} }{ \| \vec{n} \|^2 } \vec{n}. \] We then obtain the formula for $\Pi_P$ as follows: \[ \Pi_P(\vec{u}) = \vec{v} = \vec{u}-\vec{w} = \vec{u} - \Pi_{P^\perp}(\vec{u}) = \vec{u} - \frac{ \vec{n} \cdot \vec{u} }{ \| \vec{n} \|^2 } \vec{n}. \]
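
In code, projecting onto a plane through the origin is the same subtraction trick. A minimal Python sketch (function name mine):

<code python>
import numpy as np

def project_onto_plane(u, n):
    """Project u onto the plane through the origin with normal vector n."""
    return u - (np.dot(n, u) / np.dot(n, n)) * n

u = np.array([4.0, 5.0, 6.0])
n = np.array([0.0, 0.0, 1.0])    # normal of the xy plane
print(project_onto_plane(u, n))  # [4. 5. 0.]
</code>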

Distances revisited

Suppose you have to find the distance between the line $\ell: \{ (x,y,z) \in \mathbb{R}^3 \ | \ (x,y,z)=p_o+t\:\vec{v}, t \in \mathbb{R} \}$ and the origin $O=(0,0,0)$. This problem is equivalent to finding the distance between the line $\ell^\prime: \{ (x,y,z) \in \mathbb{R}^3 \ | \ (x,y,z)=\vec{0}+t\:\vec{v}, t \in \mathbb{R} \}$ and the point $p_o$. The answer to the latter question is the length of the projection $\Pi_{\ell^\perp}(p_o)$: \[ d(\ell^\prime,p_o) = \left\| \Pi_{\ell^\perp}(p_o) \right\| = \left\| p_o - \frac{ p_o \cdot \vec{v} }{ \| \vec{v} \|^2 } \vec{v} \right\|. \]
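
As a quick numeric check, here is the same computation as a Python sketch (the function name `distance_line_origin` is mine):

<code python>
import numpy as np

def distance_line_origin(p0, v):
    """Distance from the origin to the line (x,y,z) = p0 + t*v."""
    w = p0 - (np.dot(p0, v) / np.dot(v, v)) * v   # perpendicular part of p0
    return np.linalg.norm(w)

p0 = np.array([0.0, 3.0, 0.0])
v = np.array([1.0, 0.0, 0.0])        # a line parallel to the x axis
print(distance_line_origin(p0, v))   # 3.0
</code>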

The distance between a plane $P: \ \vec{n} \cdot [ (x,y,z) - p_o ] = 0$ and the origin $O$ is the same as the distance between the plane $P^\prime: \vec{n} \cdot (x,y,z) = 0$ and the point $p_o$. We can obtain this distance by finding the length of the projection of $p_o$ onto $P^{\prime\perp}$ using the formula above: \[ d(P^\prime,p_o)= \left\| \Pi_{P^{\prime\perp}}(p_o) \right\| = \frac{| \vec{n}\cdot p_o |}{ \| \vec{n} \| }. \]
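
And the corresponding sketch for the plane distance (again with numpy; `distance_plane_origin` is my own name):

<code python>
import numpy as np

def distance_plane_origin(n, p0):
    """Distance from the origin to the plane n . [(x,y,z) - p0] = 0."""
    return abs(np.dot(n, p0)) / np.linalg.norm(n)

n = np.array([0.0, 0.0, 2.0])         # normal vector of a horizontal plane
p0 = np.array([1.0, 1.0, 5.0])        # a point on the plane z = 5
print(distance_plane_origin(n, p0))   # 5.0
</code>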

You should try to draw the picture for the above two scenarios and make sure that the formulas make sense to you.

Projection matrices

Because projections are a type of linear transformation, they can be expressed as a matrix-vector product: \[ \vec{v} = \Pi(\vec{u}) \qquad \Leftrightarrow \qquad \vec{v} = M_{\Pi}\vec{u}. \] We will learn more about that later on, but for now I want to show you a simple example of a projection matrix. Let $\Pi$ be the projection onto the $xy$ plane. The matrix that corresponds to this projection is \[ \Pi(\vec{u}) = M_{\Pi}\vec{u} = \begin{pmatrix} 1 & 0 & 0 \nl 0 & 1 & 0 \nl 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} u_x \nl u_y \nl u_z \end{pmatrix} = \begin{pmatrix} u_x \nl u_y \nl 0 \end{pmatrix}. \] As you can see, multiplying by $M_{\Pi}$ has the effect of selecting only the $x$ and $y$ coordinates and killing the $z$ component.
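
Here is the same matrix-vector product as a short numpy sketch:

<code python>
import numpy as np

M = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])        # projection matrix for the xy plane

u = np.array([4, 5, 6])
print(M @ u)                     # [4 5 0]: the z component is killed
</code>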

Examples

Example: Color to greyscale

Consider a digital image where the colour of each pixel is specified as an RGB value. Each colour pixel is, in some sense, three-dimensional: the red, green, and blue dimensions. A pixel of a greyscale image is just one-dimensional and measures how bright the pixel needs to be.

When you tell your computer to convert an RGB image to greyscale, what you are doing is applying the projection $\Pi_G$ of the form: \[ \Pi_G : \mathbb{R}^3 \to \mathbb{R}, \] which is given by the following equation: \[ \begin{align*} \Pi_G(R,G,B) &= 0.2989 \:R + 0.5870 \: G + 0.1140 \: B \nl &= (0.2989, 0.5870, 0.1140)\cdot(R,G,B). \end{align*} \]
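
A minimal sketch of this conversion for a single pixel (the RGB value below is a made-up example):

<code python>
import numpy as np

weights = np.array([0.2989, 0.5870, 0.1140])   # coefficients from above
rgb = np.array([200.0, 100.0, 50.0])           # a hypothetical orange-ish pixel

grey = np.dot(weights, rgb)   # the projection is just a dot product
print(grey)                   # ~ 124.18
</code>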

Discussion

In the next section we will talk about a particular set of projections known as the coordinate projections which we use to find the coordinates of a vector $\vec{v}$ with respect to a given coordinate system: \[ \begin{align*} v_x\hat{\imath} = (\vec{v} \cdot \hat{\imath})\hat{\imath} = \Pi_x(\vec{v}), \nl v_y\hat{\jmath} = (\vec{v} \cdot \hat{\jmath})\hat{\jmath} = \Pi_y(\vec{v}), \nl v_z\hat{k} = (\vec{v} \cdot \hat{k})\hat{k} = \Pi_z(\vec{v}). \end{align*} \] The linear transformation $\Pi_x$ is the projection onto the $x$ axis and similarly $\Pi_y$ and $\Pi_z$ project onto the $y$ and $z$ axes.
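
To preview the idea in code, here is a short numpy sketch of the three coordinate projections (the variable names are mine):

<code python>
import numpy as np

v = np.array([1.0, 2.0, 3.0])
ihat = np.array([1.0, 0.0, 0.0])
jhat = np.array([0.0, 1.0, 0.0])
khat = np.array([0.0, 0.0, 1.0])

# Each coordinate projection is a dot product followed by a rescaling.
Pi_x = np.dot(v, ihat) * ihat        # [1. 0. 0.]
Pi_y = np.dot(v, jhat) * jhat        # [0. 2. 0.]
Pi_z = np.dot(v, khat) * khat        # [0. 0. 3.]
print(Pi_x + Pi_y + Pi_z)            # [1. 2. 3.]: the three parts recover v
</code>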

It is common in science to talk about vectors as triplets of numbers $(v_x,v_y,v_z)$ without making an explicit reference to the basis. Thinking of vectors as arrays of numbers is fine for computational purposes (to compute the sum of two vectors, you just need to manipulate the coefficients), but it masks one of the most important concepts: the basis or the coordinate system with respect to which the components of the vector are expressed. A lot of misconceptions students have about linear algebra stem from an incomplete understanding of this core concept.

Since I want you to leave this chapter with a thorough understanding of linear algebra, we will now review—in excruciating detail—the notion of a basis and how to compute vector coordinates with respect to this basis.

 