Welcome to Calculus. I'm Professor Ghrist. We're about to begin Lecture 42 on fair probability. Let's take what we've learned about averages, moments, and centroids, and put them to use in an entirely different setting: that of probability. We're going to begin our introduction to probability with a geometric treatment that is both visual and visceral.

Our probability begins with counting. Consider something as simple as asking: what happens when you roll a pair of dice? What's the probability that the sum of the numbers you get is greater than seven? Well, in this case, you can simply make a table of all the possible outcomes for the first and the second die, add up their face values, and identify which of those are greater than 7. When you do so, the probability is a ratio: the number of outcomes that are greater than 7 to the total number of outcomes. In this case, the numbers work out to 15 over 36. That means you have a little less than a 50-50 chance of getting something bigger than 7. Now, this is simple enough, but do you see an integral hiding in there somewhere? Stay tuned.

Consider a simple situation of spinning a dial. On some outcomes you win; on others you lose. You could compute the probability of a win by counting sectors, as before, but there's another way to approach this. One can think of the random variable as being an angle, where some angles connote a win. Spinning the dial corresponds to choosing an angle at random between 0 and 2 pi. In this case, how would you compute the probability? Well, one could think of it as a ratio of the number of winning angles to the total number of angles. But we're really not counting angles, are we? We're really computing length. In this simple example, one would get an answer that is the same as counting sectors, but in other examples we would really need to compute lengths. For example, what's the probability that a randomly chosen angle on the circle has sine larger than one half? Well, we would compute this probability as a ratio, as before. The total set of angles has length 2 pi, the length of the circle. If we look at those angles that have sine larger than one half, well, that length is 2 pi over 3. Taking the ratio gives us a probability of one third.

We can do the same thing with area. With what probability does a randomly chosen point in a square lie within the inscribed circle? So, take the square, consider the inscribed circle, and then choose a point in the square at random. Think of it as throwing a dart. What are the odds that you land inside that circular region? Well, in this case it's going to be an area fraction. We take the area of the disc and divide by the area of the square; that is our probability. We can compute that easily enough: if the radius is r, then the disc has area pi r squared and the square has area (2r) squared. That leads to pi over 4, or about 79%. Those are your odds.
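Before moving on to volume, here is a quick numerical sanity check of that last area fraction. This is my own sketch, not part of the lecture: it assumes Python, the function name is hypothetical, and it simply throws random darts at the square and counts how many land inside the inscribed disc.

```python
# Minimal Monte Carlo sketch (assumption: Python; names are illustrative):
# estimate the probability that a uniformly random point in a square lands
# inside the inscribed circle, and compare with the exact value pi/4.
import math
import random

def dart_in_circle_probability(trials: int = 1_000_000, r: float = 1.0) -> float:
    """Sample points uniformly from the square [-r, r] x [-r, r] and
    return the fraction that fall inside the inscribed disc of radius r."""
    hits = 0
    for _ in range(trials):
        x = random.uniform(-r, r)
        y = random.uniform(-r, r)
        if x * x + y * y <= r * r:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    estimate = dart_in_circle_probability()
    print(f"estimated probability: {estimate:.4f}")
    print(f"exact value pi/4:      {math.pi / 4:.4f}")  # about 0.785, roughly 79%
```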
In some contexts, volume is the appropriate tool. Consider the following: with what probability does a randomly chosen point in a ball lie within the outer 1%? That is, within 1% of the boundary, as measured by radius. So we have a solid ball of radius r. It's the point that is chosen at random, not the radius. Given a random point in that ball, what are the odds that its radial coordinate is within the outer 1%? Well, we have to think in terms of volumes, since this is a three-dimensional ball. We know the volume of a ball of radius r is four-thirds pi r cubed.

So, to compute the probability in this case, what kind of computation should we do? Well, given our formula for volume, it's really easy to compute the volume of a ball. If we pick a point at random in the ball, then the total volume of possibilities is four-thirds pi r cubed. Now, what are the odds that a randomly chosen point lies within the inner 99%? Well, that too is a ball, of radius 0.99 r. This ratio of volumes gives us the odds of not being within the outer 1%, so to compute the odds of being in the outer 1%, we take one minus this ratio. What you'll notice is that there's some convenient cancellation that goes on: the four-thirds pi cancels, the r cubed cancels, and we're left with 1 minus 0.99 cubed. That works out to about 0.0297, which means there's about a 3% probability of being within that crust, that outer shell. Notice, it's not 1%, because it's the point that is random and not the radius.

Now, having done length, area, and volume, of course you know what's coming next: high-dimensional volume, or hypervolume. In this case, let's repeat the problem with a ball of dimension n. The volume of an n-dimensional ball of radius r is some constant v sub n times r to the n. You don't have to remember what that constant v sub n is, because if we follow the exact same steps as before, computing the probability of being within that outer 1%, we get the ratio of the volume of that inner 99% ball, which is v sub n times the quantity 0.99 r to the n, to the volume of the full n-dimensional ball, v sub n times r to the n. Just as before, the v sub n's cancel, the r to the n's cancel, and we are left with 1 minus 0.99 to the n. Now, what happens as n gets large? 0.99 to the n goes to 0, and we're left with the somewhat surprising conclusion that when the dimension is high enough (for example, if it is 459 or bigger), then more than 99% of the volume of this ball is in the crust. If you pick a point in the ball at random, your odds of being within 1% of the boundary are greater than 99%. That comes as a bit of a surprise, but not if you know how high-dimensional volume works.

These examples illustrate the basis of what we call fair probability, or uniform probability. For a uniform probability distribution over some domain D, whether it's a ball or a circle or a square, the probability of a random point being in some subset A within D is given by a volume fraction: the probability of being within A is the ratio of the volume of A to the volume of D. What we mean by volume depends on the dimension. When we were spinning a dial, we looked at length. When we were choosing a point at random in the square, we considered area fractions. When we were choosing a point in a ball, we were considering volume, either three-dimensional or higher, depending on the dimension of D. And lastly, when we simply rolled dice, we were counting. But that counting is in itself zero-dimensional volume, the volume associated to a discrete set. Now, that one example had some unique features, in that there were two variables, the two dice, and they were independent, and so we could compute probabilities in terms of multiplication.
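Before moving on to an example with integrals, it's worth putting numbers on that crust claim. The following is a short check of my own (assuming Python; the function name is mine, not the lecture's): since the constants v sub n and r to the n cancel, the crust probability in dimension n is just 1 minus 0.99 to the n, and we can search for the smallest n where it exceeds 99%.

```python
# Sketch (assumption: Python): the probability that a uniform random point
# in an n-dimensional ball lies within the outer 1% of the radius is
# 1 - 0.99**n, since the constants v_n and r**n cancel in the volume ratio.

def crust_probability(n: int, shell: float = 0.01) -> float:
    """Probability that a uniform random point in an n-ball lies within
    the outer `shell` fraction of the radius."""
    return 1.0 - (1.0 - shell) ** n

if __name__ == "__main__":
    for n in (3, 10, 100, 459):
        print(f"n = {n:4d}: crust probability = {crust_probability(n):.4f}")
    # smallest dimension where the crust holds more than 99% of the volume
    n = 1
    while crust_probability(n) < 0.99:
        n += 1
    print(f"crust exceeds 99% of the volume once n >= {n}")  # prints 459
```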
For an example that involves both independent variables and integrals, we're going to consider the classical Buffon needle problem. This problem goes as follows. Consider a collection of parallel lines in the plane, separated by some distance l, and then drop a needle on the plane. Let's say the needle also has length l. Then what are the odds that you have a crossing, that is, that the needle crosses one of the lines? Well, one way to answer this question would be to drop a whole bunch of needles, count how many of them cross a line, and divide by the total number of needles that you dropped. That would be an approximation, but we can do better than that.

If, as stated, we let l denote both the distance between these parallel lines and the length of the needle, then what variables do we use to characterize where a randomly tossed needle has fallen? We'll let h denote the horizontal distance from the left tip of the needle to the nearest line on its right. But that doesn't completely characterize the needle. We also need to know theta, the angle that the needle makes with the lines. Now, what are the bounds on h and theta? h can vary between 0 and l. Theta doesn't vary from 0 to 2 pi, but rather from 0 to pi, by means of how we've defined h in terms of the left tip.

Now, given this, how can we say whether or not a needle crosses a line? Well, if we set up a right triangle based on that needle, then the hypotenuse has length l and we know the angle theta. Therefore, the horizontal width of this triangle is equal to l times sine theta. If that quantity is greater than or equal to h, then we have a crossing of the line. So, if in this theta-h plane we graph l times sine theta, then dropping a collection of needles is the same thing as sampling this rectangle at random, and any random point that obeys this inequality, that falls under the curve, connotes a crossing.

And now we can do some calculus, because these variables are independent: changing h doesn't change theta, and changing theta doesn't change h. We can therefore compute the probability of a crossing as an area fraction, in this case taking the ratio of the area under the curve to the total area of the rectangle. What's the area under the curve? Why, that's simply the integral, as theta goes from 0 to pi, of l sine theta d theta. What's the area of the entire domain? It's the area of the rectangle, pi times l. Now, I think you can do this integral in your head. Noticing that the l's cancel, one obtains a probability of 2 over pi.

That's very interesting, but maybe more interesting than you might suspect, because of the following fact: this probability has pi within it, and we have an interpretation in terms of dropping needles on, say, a sheet of paper. This principle means that you should be able to approximate the value of pi simply by dropping needles on a field of lines, counting how many times the needles cross the lines, and then dividing by the total number of needles that you dropped. That ratio should come closer and closer to 2 over pi. By taking the reciprocal and multiplying by 2, you can, in principle, approximate pi. Maybe you should try it (a simulation sketch follows at the end of this lecture). If you do, one of the things you'll find is that probability is a somewhat mysterious subject. You'll also learn a thing or two about convergence.

Our treatment of probability won't end with volumes and needles and pi. In fact, we've hardly begun. In our next lesson, we're going to consider what happens when the world isn't fair, or at least when the underlying probability distribution isn't fair. To do that, we'll need to wed our volumetric approach with a more mass-based approach from our previous lessons.
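If you'd rather drop needles virtually than physically, here is a minimal simulation sketch of my own (assuming Python; the function name and the choice l = 1 are mine). Following the setup above, each dropped needle is modeled by two independent uniform variables, h in [0, l] and theta in [0, pi], and a crossing occurs exactly when l times sine theta is at least h.

```python
# Simulation sketch of the Buffon needle experiment (assumption: Python).
# The crossing probability is 2/pi, so inverting and doubling the observed
# crossing frequency gives an approximation of pi.
import math
import random

def estimate_pi(drops: int = 1_000_000, l: float = 1.0) -> float:
    """Drop `drops` virtual needles of length l on lines spaced l apart
    and return the resulting estimate of pi."""
    crossings = 0
    for _ in range(drops):
        h = random.uniform(0.0, l)            # distance from left tip to nearest line on the right
        theta = random.uniform(0.0, math.pi)  # angle the needle makes with the lines
        if l * math.sin(theta) >= h:
            crossings += 1
    p_hat = crossings / drops                 # should approach 2/pi
    return 2.0 / p_hat                        # reciprocal, times 2, approximates pi

if __name__ == "__main__":
    print(f"estimated pi: {estimate_pi():.4f}")
    print(f"actual pi:    {math.pi:.4f}")
```

Running this a few times with different numbers of drops also illustrates the convergence point above: the estimate wanders noticeably, and the error shrinks only slowly as the number of drops grows.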