Infinity

Let's say you have a length $\ell$ and you want to divide it into infinitely many, infinitely short segments. There are infinitely many of them, but they are infinitely short so they add up to the total length $\ell$.

OK, that sounds complicated. Let's start from something simpler. We have a piece of string of length $\ell$ and we want to divide this length into $N$ pieces. Each piece will have length: \[ \delta = \frac{\ell}{N}. \] Let's check that, together, the $N$ pieces of length $\delta$ add up to the total length of the string: \[ N \delta = N \frac{\ell}{N} = \ell. \] Good.

Now imagine that $N$ is a very large number. In fact, we let $N$ grow without bound, taking on larger and larger values. The larger $N$ gets, the more fine-grained the notion of "small piece of string" becomes. In this case we would have: \[ \lim_{N\to \infty} \delta = \lim_{N\to \infty} \frac{\ell}{N} = 0, \] so effectively the pieces of string are infinitely small. However, when you add them up you will still get: \[ \lim_{N\to \infty} \left( N \delta \right) = \lim_{N\to \infty} \left( N \frac{\ell}{N} \right) = \ell. \]
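We can check this numerically. Here is a quick Python sketch (the names `ell` and `delta` are just my choice of variables) showing that each piece shrinks toward zero as $N$ grows, yet $N\delta$ always recovers $\ell$:

```python
# Divide a length ell into N pieces of width delta = ell/N,
# then verify that the N pieces still add up to ell,
# even as N grows very large and delta shrinks toward zero.
ell = 1.0
for N in [10, 1000, 10**6, 10**9]:
    delta = ell / N          # each piece gets shorter and shorter...
    total = N * delta        # ...but together they still make up ell
    print(N, delta, total)
```

As $N$ increases, the `delta` column heads toward $0$ while the `total` column stays pinned at $\ell$.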

The lesson to learn here is that, if you keep things well defined you can use the notion of infinity in your equations. This is the central idea of this course.

Infinitely large

The number $\infty$ is really large. How large? Larger than any number you can think of! Say you think of a number $n$; then it is true that $\infty > n$. "But wait," you say, "I actually thought of a different number $N > n$." Well, it will still be true that $\infty > N$. In fact, any finite number you can think of, no matter how large, will always be strictly smaller than $\infty$.
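You can even play this game on a computer. Python's `math.inf` is the floating-point stand-in for $\infty$ (a caveat: it is the IEEE 754 infinity, not the mathematical object itself, but it obeys the same comparison rule):

```python
# math.inf compares strictly greater than any finite number.
import math

for n in [5, 10**6, 10**100, 1.7e308]:   # 1.7e308 is near the largest float
    assert math.inf > n
print("math.inf beat every finite number we tried")
```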

Infinitely small

If instead of a really large number, we want to have a really small number $\epsilon$, we can simply define it as the reciprocal of (one over) a really large number $N$: \[ \epsilon = \lim_{N \to \infty \atop N \neq \infty} \frac{1}{N}. \] However small $\epsilon$ gets, it remains strictly greater than zero: $\epsilon > 0$. This is ensured by the condition $N\neq \infty$; otherwise we would simply have $\lim_{N \to \infty} \frac{1}{N} = 0$.

The infinitely small $\epsilon>0$ is a new beast like nothing you have seen before. It is a non-zero number that is smaller than any number you can think of. Say you think $0.00001$ is pretty small; well, it is true that $0.00001 > \epsilon > 0$. "No," you say, "I was actually thinking about $10^{-16}$, a number with 15 zeros after the decimal point." It will still be true that $10^{-16} > \epsilon > 0$, or even $10^{-123} > \epsilon > 0$. Like I said, I can make $\epsilon$ smaller than any number you can think of simply by choosing $N$ to be larger and larger, yet $\epsilon$ always remains non-zero.
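The same game, sketched numerically: for every finite $N$, the quantity $\epsilon = \frac{1}{N}$ gets tiny but never actually reaches zero.

```python
# epsilon = 1/N shrinks as N grows, yet stays strictly positive
# for every finite N (the challenge numbers from the text).
for N in [10**5, 10**16, 10**123]:
    eps = 1 / N
    assert eps > 0           # small, but never zero for finite N
    print(N, eps)
```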

Infinity for limits

When evaluating a limit, we often make the variable $x$ go to infinity. This is useful information, for example, if we want to know what the function $f(x)$ looks like for very large values of $x$. Does it get closer and closer to some finite number, or does it blow up? For example, the decaying exponential function $e^{-x}$ tends to zero for large values of $x$: \[ \lim_{x \to \infty} e^{-x} = 0. \] Similarly, as we saw in the examples above, the function $\frac{1}{x}$ also tends to zero: \[ \lim_{x \to \infty} \frac{1}{x} = 0. \]
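Tabulating a few values suggests (though of course does not prove) both limits. A small sketch:

```python
# Evaluate f(x) = exp(-x) and g(x) = 1/x for growing x.
# In exact arithmetic neither ever reaches 0 for finite x;
# note that floats would eventually underflow to 0.0, so we stop at x=100.
import math

for x in [1, 10, 50, 100]:
    print(x, math.exp(-x), 1 / x)
```

Both columns shrink toward $0$, yet every printed value is strictly positive.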

Note that in both cases, the functions will never actually reach zero. They get closer and closer to zero but never actually reach it. This is why the limit is a useful quantity, because it says that the functions get arbitrarily close to 0.

Sometimes infinity might come out as an answer to a limit question: \[ \lim_{x\to 3^-} \frac{1}{3-x} = \infty, \] because as $x$ gets closer to $3$ from below, i.e., $x$ will take on values like $2.9$, $2.99$, $2.999$, and so on and so forth, the number in the denominator will get smaller and smaller, thus the fraction will get larger and larger.
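We can watch the blow-up happen numerically, using exactly the sequence of $x$ values from the text:

```python
# As x approaches 3 from below, the denominator 3 - x shrinks toward 0
# (while staying positive), so the fraction 1/(3 - x) grows without bound.
for x in [2.9, 2.99, 2.999, 2.9999]:
    print(x, 1 / (3 - x))
```

Each step closer to $3$ multiplies the fraction by roughly $10$.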

Infinity for derivatives

The derivative of a function is its slope, defined as the "rise over run" for an infinitesimally short run: \[ f'(x) = \lim_{\epsilon \to 0} \frac{\text{rise}}{\text{run}} = \lim_{\epsilon \to 0} \frac{f(x+\epsilon)\ - \ f(x)}{x+\epsilon \ - \ x} = \lim_{\epsilon \to 0} \frac{f(x+\epsilon) - f(x)}{\epsilon}, \] where in the last step we simplified the run to $(x+\epsilon) - x = \epsilon$.
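A small but finite $\epsilon$ already gives a good approximation of the slope. Here is a sketch (the helper `derivative` is mine, not a library function), using $f(x)=x^2$, whose exact derivative at $x=2$ is $4$:

```python
# Approximate the derivative as rise over run with a small finite epsilon.
def derivative(f, x, eps=1e-6):
    return (f(x + eps) - f(x)) / eps   # rise / run, with run = eps

approx = derivative(lambda x: x**2, 2.0)
print(approx)   # close to the exact answer 4
```

Shrinking `eps` further brings the approximation ever closer to the true limit.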

Infinity for integrals

The area under the curve $f(x)$ for values of $x$ between $a$ and $b$ can be thought of as consisting of many little rectangles of width $\epsilon$ and height $f(x)$: \[ \epsilon f(a) + \epsilon f(a+\epsilon) + \epsilon f(a+2\epsilon) + \cdots + \epsilon f(b-\epsilon). \] In the limit where we take infinitesimally small rectangles, we obtain the exact value of the integral: \[ \int_a^b f(x) \ dx= A(a,b) = \lim_{\epsilon \to 0}\left[ \epsilon f(a) + \epsilon f(a+\epsilon) + \epsilon f(a+2\epsilon) + \cdots + \epsilon f(b-\epsilon) \right]. \]
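The rectangle sum translates almost directly into code. A sketch (the helper `integral` is my own, not a library routine), using $f(x)=x^2$ on $[0,1]$, whose exact area is $\frac{1}{3}$:

```python
# Approximate the integral of f from a to b as a sum of rectangles
# of width eps and height f(x), starting at x = a.
def integral(f, a, b, eps=1e-4):
    total = 0.0
    x = a
    while x < b:
        total += eps * f(x)   # one rectangle: width eps, height f(x)
        x += eps
    return total

print(integral(lambda x: x**2, 0.0, 1.0))  # close to 1/3
```

Making `eps` smaller adds more, thinner rectangles and drives the sum toward the exact area.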

Infinity for series

For a given $|r|<1$, what is the sum \[ S = 1 + r + r^2 + r^3 + r^4 + \ldots = \sum_{k=0}^\infty r^k \ \ ? \] Obviously, taking your calculator and performing the summation is not practical since there are infinitely many terms to add.

For several such infinite series, there is actually a closed-form formula for their sum. The above series is called the geometric series and its sum is $S=\frac{1}{1-r}$. How were we able to tame the infinite? In this case, we used the fact that $S$ is similar to a shifted version of itself: $rS = r + r^2 + r^3 + \cdots$, so $S = 1 + rS$. Solving for $S$ gives $S(1-r)=1$, and hence $S=\frac{1}{1-r}$.
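Though we can't add infinitely many terms on a calculator, we can check that the partial sums approach the closed-form answer. A sketch with $r=0.5$, where $\frac{1}{1-r}=2$:

```python
# Accumulate partial sums of the geometric series 1 + r + r^2 + ...
# and compare against the closed-form answer 1/(1 - r).
r = 0.5
S, term = 0.0, 1.0
for k in range(100):
    S += term      # add the term r**k
    term *= r      # next term is r**(k+1)
print(S, 1 / (1 - r))
```

After only a hundred terms the partial sum is indistinguishable from $2$ at floating-point precision.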