This post explains what the limit of a function is. The limit operation appears in several areas of maths, but I present it here as a prerequisite to understanding calculus: the limit is what formally defines the core calculus operations.

The concept of a limit is easy to understand, but limits can be tricky to evaluate. The limit of a function f(x) is the value the function approaches as x approaches a specified value. In notation:

\[\lim _{x\rightarrow a} \ f( x)\]

is asking the question “what does f(x) approach as x approaches a?”. In many cases this is a trivial question: the answer is often just f(a). Graphically:

But if the function is not defined at a, or is not continuous at a (you have to lift your pencil off the page to continue drawing the graph at a), the answer is not as obvious.
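For the easy case, a quick numerical check makes the idea concrete. This is a minimal Python sketch; the function f(x) = x² + 1 and the point a = 3 are my own choices, not from the post. Because f is continuous at 3, plugging in points ever closer to 3 shows the values settling on f(3):

```python
# For a function continuous at a, the limit as x -> a is just f(a).
def f(x):
    return x**2 + 1

a = 3
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(f"f({a + h}) = {f(a + h):.6f}")  # values approach f(3) = 10

print(f"f({a}) = {f(a)}")
```

The printed values march toward 10, which is exactly f(3).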

The common values for a, especially as we will eventually see in calculus, are 0 or ±∞. Let’s look at an example:

\[\lim _{x\rightarrow \infty }\frac{2x+2}{x+1}\]

Limits are nice in that you can take the limits of the individual parts of a function and combine them. Looking at the numerator, 2x + 2, you can see that it goes to infinity as x goes to infinity; so does the denominator, x + 1. But you cannot conclude that the quotient goes to 1. This is an example of an indeterminate form, ∞/∞; another is 0/0. When one of these occurs, you usually have to do some algebra to simplify the function, then try taking the limit again.

Notice that

\[\frac{2x+2}{x+1} \ =\frac{2( x+1)}{x+1} =2\]

And the limit of a constant such as 2 is just 2: it doesn’t matter what x is.
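You can also see this numerically. Here is a small Python sketch (the function name `ratio` is mine): even though numerator and denominator each blow up, their quotient is exactly 2 at every sample point, just as the algebraic simplification predicts.

```python
# Evaluate (2x + 2)/(x + 1) for increasingly large x.
# Numerator and denominator separately go to infinity (the ∞/∞ form),
# but the ratio equals 2 for every x other than -1.
def ratio(x):
    return (2 * x + 2) / (x + 1)

for x in [10, 1_000, 100_000]:
    print(f"x = {x}: ratio = {ratio(x)}")
```

A numerical check like this is evidence, not proof; the algebra above is what actually establishes the limit.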

There are other ways to arrive at the same answer, but algebraically simplifying the function and then retaking the limit is the method we will use in the basic definitions of calculus.

In my next post, I will introduce the idea of a rate of change of a function. This will be shown to be the derivative of a function.