Welcome to Calculus. I'm Professor Ghrist. We're about to begin Lecture 25 on the definite integral. In this lesson, we'll turn our attention from the indefinite integral, a class of functions, to the definite integral, a numerical quantity. No doubt you've seen definite integrals before. But do you remember how they're defined and what they really mean? This is one of those concepts that takes a few readings to really sink in. In this lesson, we'll give you a fresh look at the definite integral. This lesson is all about adding larger and larger numbers of smaller and smaller local amounts into some global sum. That's not an unusual thing to do, so let's do it in the context of a simple classical example. Compute the sum, as i goes from 1 to n, of i; that is, 1 + 2 + 3, all the way up to n. Now, we could think about this a bit more globally, or geometrically, by representing each i as a column of i squares, each with side length 1. The net sum then looks something like a triangle with base n and height n, but discretized into these squares. What is the sum, represented as an area? Well, the area of the triangle would be (1/2)·n·n. But this ignores a few small leftover triangles, each with area 1/2. How many are there? Of course, there are n such leftover triangles. That yields a net area of (1/2)·n·(n+1). Now, if we think about what it would take to add up 1 + 2 + 3, all the way up to n, these are local additions, or local computations. On the other hand, the general formula as a function of n is something more global, and that's really the key intuition behind what we're about to build: the definite integral. The definite integral is a generalization of this kind of reasoning to more difficult or nonlinear sums. The definition of the definite integral is a little bit involved, so stick with me and review again as necessary. We write the integral of f(x) dx, as x goes from a to b, as a certain limit.
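The local-versus-global contrast above can be checked numerically. This is a small sketch (the function names are illustrative, not from the lecture): one function adds 1 + 2 + ... + n term by term, the other uses the closed form from the triangle-plus-leftovers argument.

```python
def triangular_sum(n):
    """The 'local' computation: add 1 + 2 + ... + n term by term."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def triangular_formula(n):
    """The 'global' closed form: triangle area n*n/2 plus
    n leftover half-square triangles of area 1/2 each."""
    return n * (n + 1) // 2

# The two agree for every n.
for n in (1, 10, 100, 1000):
    assert triangular_sum(n) == triangular_formula(n)
```

The loop scales linearly with n, while the formula is a single evaluation; that efficiency gap is exactly the local-to-global payoff the lecture describes.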
But what is that limit? How do we set it up? Well, first we restrict to the interval from a to b, and then we build a partition. That is, we split this interval up into subintervals P_1, P_2, all the way up to P_n, that fill up the domain from left to right. Each subinterval has a width associated to it, called Δx_i. Within each partition element we choose a point x_i within P_i. This is called a sampling. It doesn't matter which point you choose; just pick one, one per subinterval. Then we first define the Riemann sum to be the sum, as i goes from 1 to n, of f evaluated at the sampling point x_i times the width Δx_i of the partition element. This Riemann sum is often visualized in terms of columns or rectangles sitting atop the partition. Then, with this in mind, the definite integral is defined to be a limit of Riemann sums, where it's an unusual sort of limit: we take the limit as the partitions get finer and finer, as the widths of the partition elements go to zero. Now, you can see that as those widths get smaller and smaller, the dependence on the sampling matters less and less, and indeed that intuition does hold true. There's a little bit of notation that goes into this. First of all, you should notice that the integral sign is really a form of the letter S, in the same way that the summation sign is a form of the Greek letter sigma. Both connote a sum. So a definite integral is really a sum, and all of the notation associated with it matches the corresponding notation in the Riemann sum, where dx is something like the limit of Δx as Δx goes to zero. The second thing to note is the limits of integration. One often writes the integral from a to b of f(x) dx. I prefer to write the integral, as x goes from a to b, of f(x) dx. This tells you exactly which variable the limits of integration refer to.
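The recipe above (partition, sample, sum) translates directly into code. Here is a minimal sketch, with illustrative names of my own choosing: `points` holds the partition endpoints, and `sample` picks one x_i in each subinterval.

```python
def riemann_sum(f, points, sample):
    """Riemann sum of f over the partition given by endpoints
    a = points[0] < points[1] < ... < points[n] = b.
    sample(lo, hi) chooses the sampling point x_i inside [lo, hi]."""
    total = 0.0
    for lo, hi in zip(points, points[1:]):
        width = hi - lo              # this is delta x_i
        total += f(sample(lo, hi)) * width
    return total

# Example: f(x) = x on [0, 1], uniform partition, right-endpoint sampling.
n = 1000
points = [i / n for i in range(n + 1)]
approx = riemann_sum(lambda x: x, points, sample=lambda lo, hi: hi)
# approx is close to 1/2, the exact value of the integral computed below.
```

Swapping in `sample=lambda lo, hi: lo` (left endpoints) or midpoints changes the answer only slightly, and the difference shrinks as the widths go to zero, matching the claim that the sampling matters less and less.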
I'm not always going to use that notation, but I will sometimes, and I suggest you do likewise. Sometimes we'll be sloppy and just write the integral from a to b. Lastly, the variable with which you do the integration is not so important. The integral of f(x) dx, as x goes from a to b, is the same as the integral of f(t) dt, as t goes from a to b. One could use other symbols; still, what matters is the value of the integral, not the name of the variable with which you integrate. Sometimes we'll just write the integral of f from a to b, if it's clear which variable we mean. Let's do an example. Compute the definite integral of x dx as x goes from 0 to 1. From our definition, this is a limit of Riemann sums over partitions as the partition elements go to 0 in width. Since f(x) = x, the Riemann sum is just the sum of x_i times the width Δx_i. Let's choose a particularly nice partition: a uniform one, meaning the widths are constant. Explicitly, we set P_i equal to the subinterval from (i−1)/n to i/n. This subinterval depends on n, and so we'll have a sequence of partitions. Next, we need to choose a sampling point, one x_i in each P_i. For simplicity, let's just choose the right-hand endpoint, i/n. Then the width Δx_i is a constant 1/n, because we have a uniform partition; therefore, the definite integral can be expressed as a limit as n goes to infinity, that is, as the widths go to 0. What does the Riemann sum look like? It is the sum, as i goes from 1 to n, of x_i, that is i/n, times the width 1/n. Now, what does this limit look like? Well, we're summing over i, and n is constant within the sum, so we can factor 1/n² out of the sum, and we're left with the sum, as i goes from 1 to n, of i. And now comes the hard part. Fortunately, we've seen that sum before. What is the sum, as i goes from 1 to n, of i? That's really (1/2)·n·(n+1).
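The algebra above can be replayed with exact rational arithmetic: with the uniform partition and right endpoints, the Riemann sum is (1/n²)·Σi = n(n+1)/(2n²) = (n+1)/(2n). A sketch using Python's `fractions` to avoid any floating-point fuzz:

```python
from fractions import Fraction

def right_endpoint_sum(n):
    """Riemann sum for f(x) = x on [0, 1]: sample x_i = i/n, width 1/n."""
    return sum(Fraction(i, n) * Fraction(1, n) for i in range(1, n + 1))

# Exactly matches the closed form (n + 1) / (2n) for every n.
for n in (2, 10, 100, 1000):
    assert right_endpoint_sum(n) == Fraction(n + 1, 2 * n)
# As n -> infinity, (n + 1)/(2n) -> 1/2, the value of the definite integral.
```

Writing the sum as (n+1)/(2n) = 1/2 + 1/(2n) makes the upcoming limit computation transparent: the 1/(2n) term is the part that vanishes.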
And now we see that, dividing by n², the leading-order term in this Riemann sum is 1/2. Everything else is higher order in 1/n, and hence goes to 0 as n goes to infinity. The answer to this definite integral is 1/2, as it must be. Do notice that the difficult part of this computation was that sum of i, as i goes from 1 to n. Note also that the definite integral satisfies certain properties. For example, linearity: if you have the integral of the sum of two functions f and g, then it's really the sum of the integrals. Otherwise said, if you add your two integrands together and then integrate, you get the same thing as if you integrate the pieces and then add them together. This is true at the level of an individual Riemann sum element, and so it's true in the limit. Likewise, if you multiply an integrand f by a scalar c, then the integral is equal to that constant c times the integral of f. Again, otherwise said, you can multiply by a constant and then integrate, or integrate and then multiply by the constant. It doesn't matter; you get to the same place whichever path you take. Again, the reason this is true is that it's true at the level of Riemann sums, and hence in the limit. Another important property is that of additivity, which states that if you take the integral of f from a to b and add to it the integral of f from b to c, then, because those limits match up, you get the integral of f from a to c. This certainly makes sense at the level of a Riemann sum: you can concatenate the intervals together. We're going to think of it in terms of adding paths together, a perspective that makes sense in the context of orientation. That is, the integral of f from a to b is minus the integral of f from b to a. Now, why does this happen?
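Linearity holds term by term in each Riemann sum, so it survives the limit. Here is a numeric spot check on [0, 1]; the helper and the particular functions f and g are chosen for illustration only.

```python
def riemann(f, n=10_000):
    """Right-endpoint Riemann sum of f over [0, 1], uniform partition."""
    return sum(f(i / n) for i in range(1, n + 1)) / n

f = lambda x: x
g = lambda x: x * x
c = 3.0

# integral(f + g) == integral(f) + integral(g): the equality already
# holds for each finite Riemann sum, up to floating-point rounding.
lhs = riemann(lambda x: f(x) + g(x))
rhs = riemann(f) + riemann(g)
assert abs(lhs - rhs) < 1e-9

# integral(c * f) == c * integral(f), for the same reason.
assert abs(riemann(lambda x: c * f(x)) - c * riemann(f)) < 1e-9
```

Note that the assertions compare two finite sums, not two exact integrals; that is the whole point of the argument in the lecture, since an identity that holds at every finite stage passes to the limit.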
Well, think of it this way: if we move the integral from b to a over to the left-hand side of the equation, we get that the integral from a to b plus the integral from b to a equals 0. Why would that have to be true? Well, by additivity, the limits match up and give us the integral from a to a, which clearly must be 0. That's one way to make sense of this orientation property. Another way to think about it is that we are adding directed paths together, and when you add the same path from a to b with the orientation reversed, it's as if the paths cancel, and you wind up with the integral over a point, which is 0. The last property we'll discuss is that of dominance, which states that if f is a non-negative function, then the integral of f over an interval is also non-negative. From that follows a slightly less obvious result: namely, if you have a function g which is bigger than f, then g − f is non-negative, which means that the integral of g − f is non-negative, which by linearity means that if g is bigger than f, then the integral of g is bigger than the integral of f. So much for the good news. The bad news is that we can hardly compute anything with this definition. There are two definite integrals we can compute. We can compute the integral of a constant by, say, choosing a uniform partition and then taking the appropriate limit; you can see that you get the constant times the width of the interval. The other integral we can do is the one we've done already: the integral of x dx. If we do that over a general interval from a to b, then I'll leave it to you to set up the uniform partition, reduce it to a limit, and get the answer, which is, as it must be, (1/2)(b² − a²). That's about it. There's a little bit more that we can do. For example, suppose we try to integrate sine of x or cosine of x, not over an arbitrary interval but over a symmetric interval from −L to L.
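The two integrals computable from the definition, a constant c over [a, b] giving c·(b − a), and x over [a, b] giving (b² − a²)/2, can be sanity-checked numerically. This sketch (helper name and test values are my own) uses a right-endpoint uniform partition, just as in the worked example:

```python
def riemann_ab(f, a, b, n=100_000):
    """Right-endpoint Riemann sum of f over [a, b], uniform partition."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

a, b, c = 1.0, 3.0, 5.0

# Constant integrand: exactly c * (b - a) in the limit.
assert abs(riemann_ab(lambda x: c, a, b) - c * (b - a)) < 1e-6

# f(x) = x: the limit is (b^2 - a^2) / 2.
assert abs(riemann_ab(lambda x: x, a, b) - (b * b - a * a) / 2) < 1e-3
```

For f(x) = x the finite sum overshoots by exactly dx·(b − a)/2 (each rectangle is a right endpoint too tall), which is the higher-order-in-1/n error that vanishes in the limit.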
Then there are a few things we would observe. For sine, there's a symmetry about the origin, which implies that every time you have a partition element on the right with, say, a positive value, you get a corresponding partition element on the left with the opposite value. These two cancel and give you an integral of 0, because sin(−x) = −sin(x). For cosine we can't quite do the same thing, but we have a symmetry about the y-axis, which means that every partition element on the right is balanced by a symmetric partition element with the same value of cosine. Therefore, we get a doubling: because cos(−x) = cos(x), we can reduce this integral to one from 0 to L and double it. This simple example reflects a more general pattern. We say that sine is an odd function and cosine is an even function. An odd function is one that has this symmetry about the origin, a function for which f(−x) = −f(x). For such a function, the definite integral over a symmetric domain from −L to L is always 0. Likewise, for an even function, where f(−x) = f(x), the integral from −L to L is twice the integral from 0 to L. Another way to think about odd and even functions is that the odd ones have an odd Taylor series and the even ones have an even Taylor series, expanded about 0. Now, in general, you're going to have to be careful: definite and indefinite integrals are not the same type of object, even though they have similar notation. A definite integral is a number, a limit of sums. The indefinite integral is an antiderivative, a class of functions. We'll soon see what they have in common. So, what do you think of the definite integral? It's not so easy to compute, is it? The definition of the definite integral is like the definition of a derivative: it's crucial, it's complex, and it's quickly forgotten. Don't forget this definition. It's important.
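The odd/even symmetry argument can be watched happening inside a Riemann sum. In this sketch (helper name and the value of L are illustrative), a midpoint sum over a symmetric partition makes the left/right cancellation for sine nearly exact, and confirms the doubling for cosine:

```python
import math

def riemann_mid(f, a, b, n=200_000):
    """Midpoint Riemann sum of f over [a, b], uniform partition."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

L = 2.0

# Odd function: sample points pair up as +x and -x, and sin(-x) = -sin(x),
# so the terms cancel and the symmetric integral is (numerically) 0.
assert abs(riemann_mid(math.sin, -L, L)) < 1e-6

# Even function: cos(-x) = cos(x), so the symmetric integral
# is twice the integral from 0 to L.
full = riemann_mid(math.cos, -L, L)
half = riemann_mid(math.cos, 0.0, L)
assert abs(full - 2 * half) < 1e-6
```

Midpoint sampling is used here because a symmetric uniform partition then pairs every sample point x with −x exactly, mirroring the cancellation argument in the lecture.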
Fortunately, we won't have to use the definition to do computations, because of what we'll learn in our next lesson: the fundamental theorem of integral calculus.