Welcome to Calculus. I'm Professor Ghrist. We're about to begin Lecture 56, Bonus Material.

In our main lesson we saw, among other things, the Taylor remainder theorem, which gives an explicit bound on the error term for a Taylor expansion of f. We are looking at the expansion of f Taylor expanded about x = a. The weak version, as you will recall, states that the error E sub N is a function that is in big O of (x - a) to the (N + 1)st. The strong version of the theorem says that E sub N of x is exactly the (N + 1)st derivative of f evaluated at some t between a and x, divided by (N + 1) factorial, times (x - a) to the (N + 1)st. There's no doubt that this is a difficult theorem, both to prove and even to understand.

Therefore, let's consider the simplest possible case, where N = 0. In this case, what we're really saying is that f(x) is the zeroth-order approximation f(a) plus some error term, E sub 0 of x, where this error is precisely the first derivative of f evaluated at some t, divided by 1 factorial, times (x - a) to the 1st power. Now we don't know exactly what the t is, but for some t this holds, and we have that f(x) = f(a) + f'(t) times (x - a). Now this may look familiar to you. You may recall a result that expresses this exactly: this is the mean value theorem, though perhaps you're used to seeing it in a slightly different form, with the terms rearranged and with a slightly different interpretation.

Recall, the classical mean value theorem is interpreted as follows. You have a smooth function f, and you consider points a and x. In this case, look at the secant line that connects these two values on the graph of f, and consider the slope of that secant line, that is, (f(x) - f(a)) divided by (x - a). Then there exists some value of t between a and x at which the tangent line to the graph has slope f'(t) equal to the slope of the secant line. That classical theorem is really just a simple corollary of the Taylor remainder theorem.

Let's turn to the problem of answering why this theorem holds. Now we can't give a complete proof, but we can give the idea of at least the weak version of this theorem. Let's begin with the simplest case, where N = 0. We Taylor expand f(x) about a and obtain f(x) = f(a) plus the error or remainder, E(x). Now how are we going to get a good expression for this? Well, we need a big result. The biggest result that we have at our disposal is the Fundamental Theorem of Integral Calculus. So let's feed this problem into that theorem and see what it says. The fundamental theorem tells us exactly what E(x) is: it is the integral as t goes from a to x of the derivative of f, f'(t) dt. Now this seems like a tautology. According to the Fundamental Theorem, that integral equals f evaluated from a to x, that is, f(x) minus f(a). So of course it works, but why is this not redundant? It's not redundant because we can start estimating this integral. For example, we can observe that this integral is in big O of (x - a) as x approaches a, or as x - a approaches zero.

But that's not all, because next we can induct and apply recursion. We know that f is a smooth function, and all of its derivatives are smooth. Therefore, let's take what we know about f and apply it to f prime. I'm going to write f'(t) as f'(a) plus some error term that is in big O of (t - a). And now I'm going to feed that expression into the integral estimating E: E(x) is the integral as t goes from a to x of f'(a) plus something in big O of (t - a). What happens when I integrate that with respect to t?
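Before carrying out that integration, it may help to see this step in symbols; this is only a restatement of what we have so far, in the same notation as above (E for the remainder, t the variable of integration):

\[
E(x) \;=\; \int_a^x f'(t)\, dt \;=\; \int_a^x \Bigl( f'(a) + O(t - a) \Bigr)\, dt .
\]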
Well, f'(a) is a constant. So when I integrate that, I get f'(a) times t, evaluated as t goes from a to x. That means f'(a) times the quantity (x - a). What happens when I integrate something in big O of (t - a)? I get something of the form, say, one half times (t - a) quantity squared. The one half doesn't matter because we're doing big O. And I take that (t - a) quantity squared and evaluate as t goes from a to x. That gives me something in big O of (x - a) quantity squared. Okay, so you can see we've got the next term in the Taylor expansion by feeding our estimate of the derivative into the integral and integrating.

So what are we going to do now that we have the first-order expansion? Well, you guessed it: we're going to recurse, or induct, and apply this result to f'(t) again. f'(t) is f'(a) + f''(a) times (t - a) + something in big O of (t - a) quantity squared. And now, as you could guess, we're going to feed that into the integral that estimates E(x): E(x) is the integral as t goes from a to x of f'(a) + f''(a) times (t - a) + something in big O of (t - a) quantity squared. What happens when we integrate this? Well, just as before, integrating f'(a) gives f'(a) times t evaluated from a to x. Now with the next term, what happens? We have f''(a); that's a constant, so it comes out. When we integrate (t - a), we get one-half (t - a) quantity squared, which, evaluated from a to x, gives one-half f''(a) times (x - a) quantity squared. What happens when we integrate the next term, the big O of (t - a) squared? Well, we get something of the form big O of (x - a) quantity cubed. And now we see that we have obtained the second-order term in the Taylor expansion, and we know that the remainder to that is in big O of (x - a) quantity cubed. I think this is enough for you to see the pattern of how this works; it is written out symbolically below.

Now of course, this is not a full proof of the Taylor remainder theorem. We have not said anything about this precise value of t, which gives you the exact error. I encourage you, if you're curious, to dig a little bit deeper and look up some more complete proofs of the Taylor remainder theorem. Some of them use some very clever ideas, ideas that you should be able to follow with what you have learned in this course.
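For reference, here is the pattern from this last step written out symbolically; it is only a recap of the weak version sketched above, in the same notation, and not the full proof:

\[
f'(t) \;=\; f'(a) + f''(a)(t - a) + O\bigl((t - a)^2\bigr),
\]
\[
E(x) \;=\; \int_a^x f'(t)\, dt \;=\; f'(a)(x - a) + \tfrac{1}{2} f''(a)(x - a)^2 + O\bigl((x - a)^3\bigr),
\]
so that
\[
f(x) \;=\; f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + O\bigl((x - a)^3\bigr).
\]
Continuing in this way, each pass through the integral yields one more Taylor term, with a remainder E sub N of x in big O of (x - a) to the (N + 1)st, which is the weak version of the theorem.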