Welcome to Calculus. I'm Professor Ghrist. We're about to begin
Lecture 45 on sequences. We now begin the last
chapter of our course, where we build a discrete calculus,
reinventing everything that we've done so far for functions with a digital input. These functions, or sequences, will be
familiar objects to you from throughout the course, but we'll look at them from
a new perspective of discrete calculus. Everything that we have done thus
far in this course has been for functions that might be
described as analog. They are functions whose inputs and
outputs, with few exceptions, vary continuously, if not smoothly. This allows us to talk about limits and differentials, and to build up calculus. In contrast, so much of what we see and
build is digital, and not analog, built of discrete,
or quantized bits. Whether it's music,
imagery, or information, many things are not amenable
to smooth calculus. For this reason we're going to
reconsider the notion of a sequence. Now we have seen sequences before, but
here is a slightly different perspective. A sequence is a discrete or digital
input function with an analog output. That is, the inputs are natural numbers. The outputs are reals. Now sequences are everywhere,
from economic data to digital signals. But thinking of them as
a function with an input, say n, and an output, a sub n,
is going to be the focus of this chapter. The notation that we'll use sometimes is
writing out the terms of the sequence, that is, the outputs: a sub 0,
a sub 1, a sub 2, etc. Now our goal is going to be to
redo everything that we have done in this calculus class but
in a digital form, or a discrete form, for functions with a discrete input,
that is for sequences. All of the many things that we
have discovered so far have digital images in this discrete calculus. We'll begin as we began this course, with examples of interesting functions or
sequences. For example there are many sequences
that are polynomial, like n squared. Other sequences do not
grow like a polynomial but are rather exponential,
like the sequence 2 to the n. How would you describe the sequence 1,
-1, 1, -1, repeating? Well, it seems a little odd at first, but one could describe this as
a trigonometric function, for example as cosine of n times pi. Now more interesting
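These first examples are easy to tabulate by machine; here is a small Python sketch (the function names are mine, for illustration, not from the lecture):

```python
import math

# A sequence is a function with a digital input n and an analog output.
def polynomial_seq(n):
    return n ** 2                    # n squared

def exponential_seq(n):
    return 2 ** n                    # 2 to the n

def alternating_seq(n):
    return math.cos(n * math.pi)     # 1, -1, 1, -1, ...

print([polynomial_seq(n) for n in range(5)])           # [0, 1, 4, 9, 16]
print([exponential_seq(n) for n in range(5)])          # [1, 2, 4, 8, 16]
print([round(alternating_seq(n)) for n in range(5)])   # [1, -1, 1, -1, 1]
```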
sequences are out there. For example, what if we remove the pi and
consider the sequence cosine n? If we plot the terms of that
sequence as a function of n, we see a curious mixture of wavelike behavior and non-repeatability: the sequence never repeats, and in that way it's a little bit unlike the trigonometric function cosine. This phenomenon is rather curious. You've seen it before if you've ever seen an object that spins so fast that your visual refresh rate can't keep up with it, and it looks like it's spinning at a different rate, or even in a different direction. This is sometimes called aliasing. Let's begin our construction of discrete
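That aliasing effect is easy to see numerically: sampling a fast oscillation only at integer inputs is indistinguishable from sampling a much slower one. A small sketch (the two frequencies are my own choices, for illustration):

```python
import math

# cos(2*pi*0.9*n) and cos(2*pi*0.1*n) agree at every integer n,
# because the two frequencies differ by a whole cycle per sample.
fast = [math.cos(2 * math.pi * 0.9 * n) for n in range(10)]
slow = [math.cos(2 * math.pi * 0.1 * n) for n in range(10)]

print(all(abs(a - b) < 1e-9 for a, b in zip(fast, slow)))  # True
```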
calculus with the notion of a limit. Now what does it mean to take
the limit of a sequence? Well, I believe our adventure begins with
failure because of the discrete input. It doesn't make sense to talk about
getting as close as you want to n = 11. The one circumstance under which it does make sense is to take a limit as n goes to infinity in this setting. Then the original definition of the limit
as n goes to infinity makes sense. We say that that limit is L if, for every epsilon greater than zero, we can find some value, let's say capital M, past which the outputs of the sequence lie within epsilon of the limit. This has to continue for
any value of epsilon that is desired. As we change the tolerance epsilon on the output, we can find a new threshold, capital M, on the input in order to satisfy the condition. So with that in mind, why would we
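To make that definition concrete, consider the sequence a sub n = 1 over n, whose limit is 0; here is a small sketch for finding the threshold M for a given epsilon (the code is illustrative, not from the lecture):

```python
import math

# For a_n = 1/n with limit L = 0: |1/n - 0| < epsilon exactly when
# n > 1/epsilon, so M = floor(1/epsilon) + 1 is a valid threshold.
def find_M(epsilon):
    return math.floor(1 / epsilon) + 1

eps = 0.25
M = find_M(eps)  # 5
print(all(abs(1.0 / n - 0.0) < eps for n in range(M, M + 1000)))  # True
```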
bother taking limits of sequences? Well, you've done so already. In the context of Taylor series and
approximation, we began this very course
with the discussion of what e was in terms of the limit of a sequence: e is 1 + 1 + one-half + one-sixth + ..., and each of these finite
sums can be thought of as a term in a sequence whose limit is e. At other times in this course, we've implicitly assumed that we
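Those finite sums are themselves easy to compute; a quick sketch of the first few terms of that sequence:

```python
import math

# The partial sums 1 + 1 + 1/2! + ... + 1/n! form a sequence whose limit is e.
def partial_sum(n):
    return sum(1.0 / math.factorial(k) for k in range(n + 1))

for n in [2, 5, 10]:
    print(n, partial_sum(n))
print("e =", math.e)  # about 2.718281828
```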
can take a limit of a sequence. We did so in Newton's method, where we glibly stated our hope that this sequence of points converges to the root that we were looking for. Now all of this is good motivation,
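As a reminder of how such a sequence of points arises, here is Newton's method for f(x) = x squared minus 2, sketched in Python (the starting point is my own choice):

```python
# Newton's method for f(x) = x^2 - 2 builds a sequence
# x_{n+1} = x_n - f(x_n)/f'(x_n) = (x_n + 2/x_n)/2,
# which we hope converges to the root sqrt(2).
x = 1.0
for _ in range(6):
    x = (x + 2.0 / x) / 2.0

print(x)  # about 1.4142135
```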
but how does one compute limits? The principle that we'll see over and over again is that one can use continuous
methods to solve discrete problems. Let's see a simple example. Consider the sequence, a sub n = (1
+ alpha over n) to the nth power. You've seen this before in the context
where n is x, a continuous variable. Solving for the limit as n goes to infinity follows the exact same method. We take the log of both sides,
pull that log inside the limit, and use it to bring the exponent n out in front. Then we use our knowledge of Taylor
series to expand that logarithm as alpha over n + big O of 1 over n squared. And this is the key point,
that our use of Taylor series and big O works just as well in the discrete
setting as in the continuous. And so we can evaluate the limit,
exponentiate, and get an answer that
should be very familiar. Another example would be
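We can sanity-check that familiar answer, e to the alpha, numerically; a small sketch with alpha = 3 (my choice, for illustration):

```python
import math

# a_n = (1 + alpha/n)^n should approach e^alpha as n goes to infinity.
def a(n, alpha):
    return (1 + alpha / n) ** n

alpha = 3.0
for n in [10, 100, 10000]:
    print(n, a(n, alpha))
print("limit:", math.exp(alpha))  # e^3 is about 20.0855
```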
a sub n = 2n squared minus n times the square root of the quantity 4n squared + 5. We would attack this by doing some algebraic manipulation, factoring out 2n squared. What is left is of the form 1 minus the square root of 1 + 5 over 4n squared. Here we see another Taylor series lurking,
that involving the square root. Using the binomial series we can obtain, after a little bit of
algebraic simplification, that the leading order term
is negative five-fourths. Everything else is in big
O of (1 over n squared). And so when we take the limit,
we obtain negative five-fourths. Now not every limit is so transparent. For example, what would you do if asked to
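That answer of negative five-fourths can be checked by plugging in large values of n; a quick sketch:

```python
import math

# a_n = 2n^2 - n*sqrt(4n^2 + 5): the leading terms cancel,
# leaving the limit -5/4.
def a(n):
    return 2 * n**2 - n * math.sqrt(4 * n**2 + 5)

for n in [10, 100, 1000]:
    print(n, a(n))  # approaches -1.25
```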
evaluate something of the form square root of 2 + square root of 2 + square root of
2 + square root of 2, etc., all nested. Well, the appropriate way to handle
something like this is to realize it as the limit of a sequence where
we build things up step by step, beginning with root 2 and then with the square root of the quantity 2 + root 2, etc. We want to compute the limit
of this sequence, a sub n. Now in order to do so,
there's one difficult and critical step. That is, you need to determine the
recursion relation that the terms satisfy. In this case, I'll tell you that it is the following: a sub n = the square root of the quantity 2 + a sub n-1. Let's say I give you
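Before computing the limit analytically, we can watch this recursion converge numerically; a small sketch:

```python
import math

# Iterate a_n = sqrt(2 + a_{n-1}), starting from a_0 = sqrt(2).
a = math.sqrt(2)
for _ in range(30):
    a = math.sqrt(2 + a)

print(a)  # very close to 2
```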
that recursion relation. How do you compute the limit? Let us, as before,
denote this limit by L and then apply the limit to
the recursion relation. On the left, we have the limit,
as n goes to infinity, of a sub n. That's L. On the right we have an a sub n-1 term. What's the limit of that
as n goes to infinity? Well of course, it too must be L. Now we're assuming that we can slip
that limit in under the square root, let's forget about whether that's legal or
not, and see what happens when we try to
simplify this algebraically. Squaring both sides,
we obtain L squared = 2 + L. With a little bit of rearrangement,
we get a polynomial that easily factors into (L-2) and (L+1). Now, this polynomial has two solutions, namely negative 1 and positive 2. We know that the limit of this
sequence is not a negative number. And therefore, if the limit exists,
it must be equal to 2. Indeed the limit does exist and is 2. Let's look at another limit of this form. In this case, 1+1 over 1+1 over 1+1, etc., all nested together. As before we can write out a sequence
that gets closer and closer to this. a naught is 1, a 1 is 1 + 1 over 1, a 2 is 1 + 1 over the quantity 1 + 1 over 1, or something like that, I don't know. It keeps going. The tricky part in this case is to find the recurrence relation
that these terms satisfy. Once again,
I’m going to tell you what it is. It is that a sub n = 1 + 1 over a sub n-1. This instruction tells you
how to build the next term. In order to compute the limit of
the a sub ns, we follow the same procedure as before, taking
the limit of the recursion relation, substituting in L for a sub n and for a sub n-1. And then, with a little bit of algebraic
rearrangement, what do we see? We get L squared- L- 1 = 0. This has roots 1 plus or
minus square root of 5 over 2. One of these is negative,
the other positive. We'll take that positive square root,
and that is our limit. This particular value is of
some independent interest. It is sometimes called the golden
ratio and given the symbol phi. It equals the quantity 1 + root 5, over 2. Now perhaps you've seen
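We can watch this second recursion converge as well; a small numerical sketch:

```python
import math

# Iterate a_n = 1 + 1/a_{n-1}, starting from a_0 = 1;
# the limit is the golden ratio phi = (1 + sqrt(5))/2.
a = 1.0
for _ in range(50):
    a = 1 + 1 / a

phi = (1 + math.sqrt(5)) / 2
print(a, phi)  # both about 1.6180339887
```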
the golden ratio before in the context of some other sequences. Perhaps you've seen
the Fibonacci sequence, that begins with 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, etc. There's a lot of interesting
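The connection hinted at here can be glimpsed numerically: ratios of consecutive Fibonacci numbers approach the golden ratio. A small sketch:

```python
import math

# Build Fibonacci numbers and look at ratios of consecutive terms.
fib = [0, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

phi = (1 + math.sqrt(5)) / 2
ratios = [fib[n + 1] / fib[n] for n in range(1, 29)]
print(ratios[-1], phi)  # both about 1.6180339887
```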
mathematics hiding behind here. And if you'd like, you might want to
take a look at the bonus material for a hint at what calculus will be able to
tell us about the Fibonacci sequence. We began this course in calculus
with a discussion of functions and a contemplation of
the exponential function. In the same manner, we've begun our
construction of the discrete calculus by considering functions or sequences. In our next lesson we'll look at
the discrete analog of the exponential function, and
see how it relates to derivatives.