The First Fundamental Theorem of Calculus

Okay, last time we talked about the Second Fundamental Theorem of Calculus. That said that if big F is an antiderivative of little f, in other words, if the derivative of F is f, then computing definite integrals of f is easy: to get the definite integral from a to b, you just evaluate F at the two endpoints and take the difference, F(b) - F(a). That's really useful, because the definition of the definite integral is the limit of a sum, and that limit can get really, really ugly. Finding an antiderivative and plugging in the endpoints is much, much easier.
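
For instance (a quick worked example of my own, not from the lecture): take f(x) = 2x, which has antiderivative F(x) = x^2. Then the integral from 1 to 3 of 2x dx is just F(3) - F(1) = 9 - 1 = 8, with no limits of sums in sight.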

But that brings up a question. How do you know that there is an antiderivative? And if there is, how can you find it? That brings us to the First Fundamental Theorem of Calculus. The First Fundamental Theorem of Calculus says that every continuous function has an antiderivative, and in fact, the indefinite integral is that antiderivative. It says that if you compute the indefinite integral and then take the derivative of that indefinite integral, you get back the original function.
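
In symbols (just restating the theorem): if f is continuous, then d/dx [ integral from a to x of f(s) ds ] = f(x).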

Let's look at a picture of what's going on. Suppose we've got our function f. Here's our function, y = f(s), and here's our variable s. We start at a certain point a, and then we define the function i(x) to be the integral from a to x of f(s) ds. In other words, if you pick a value x, then i(x) is all the area under the curve between a and x. Okay, great! It's a function. If you change the value of x, making x a little bit bigger, you pick up some extra area, this region here. If you make x smaller, you lose some area. So i(x) is the total accumulated area you sweep out, starting from a and going until you get to x.
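
To make that concrete with an example of my own (not the lecture's): take f(s) = 2s and a = 0. Then i(x) is the area under the line y = 2s from 0 to x, a triangle with base x and height 2x, so i(x) = x^2. Every choice of x gives a number, so i really is a function of x.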

Let's go ahead and see if we can take the derivative of this function. We're not going to use any of our derivative formulas, like the formula for the derivative of x^3, or the chain rule, the product rule, or the quotient rule. We're going to go back to the definition of the derivative. The derivative of i(x) is the limit as h goes to 0 of [ i(x+h) - i(x) ] / h. Let's draw the point x+h and ask: what is i(x+h)? i(x+h) is all the area from a to x+h, and i(x) is all the area from a to x. The difference is all of the area between x and x+h. In other words, it's the integral from x to x+h of f(s) ds, and we're dividing everything by h.

How big is this area? Well, it's roughly a rectangle: a little shorter than that rectangle in places, a little taller in others, but roughly a rectangle, and the width of that rectangle is h. So if you take the area of this region and divide by h, what you're really doing is taking the average height of f between x and x+h. The height is not exactly constant; here it's a little bit bigger than it is there. But if you take a very small value of h and look only in that really small region, then the height doesn't change much at all. In fact, as long as the function is continuous, as you make the region thinner and thinner and thinner, you get less and less wiggle room about how high the function is. So the limit is the actual height at the point x. In other words, the limit is f(x). We have just proved half of the Fundamental Theorem of Calculus.
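
To see this argument numerically, here is a small sketch in Python. It is not part of the lecture, and the choices of f(s) = 2s, a = 0, x = 1.5, the Riemann-sum resolution, and the values of h are all mine: it approximates the area function i(x) with a Riemann sum and watches the difference quotient [ i(x+h) - i(x) ] / h settle down toward f(x).

import numpy as np

def f(s):
    return 2 * s  # example integrand; any continuous function would do

def i(x, a=0.0, n=200000):
    # accumulated area from a to x, approximated by a left Riemann sum
    s = np.linspace(a, x, n, endpoint=False)
    return np.sum(f(s)) * (x - a) / n

x = 1.5
for h in [0.1, 0.01, 0.001]:
    print(h, (i(x + h) - i(x)) / h)  # roughly 3.1, 3.01, 3.001
print("f(x) =", f(x))  # the limit should be f(1.5) = 3.0

As h shrinks, the difference quotients head toward 3.0, which is exactly f(1.5), matching the limit argument above.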

What's the big deal? Now that we've proved it, what can you do with it? One thing: remember, we have three different notions. We have the definite integral, the integral from a to b of f(x) dx, which is the same as the integral from a to b of f(s) ds; it doesn't matter what we call the variable. We have our indefinite integral, the integral from a to x of f(s) ds. And we have our antiderivative, which we called F(x). Now the point is, we just showed that the indefinite integral is an antiderivative. For that matter, any antiderivative is, up to a constant, an indefinite integral. All antiderivatives are the same; they're all the same up to a constant.
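
For example (my example again): x^2, x^2 + 5, and x^2 - 7 are all antiderivatives of 2x, and any two of them differ only by a constant.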

In the same way, all indefinite integrals are the same up to a constant. See, if we change the value of a, if we start a over here instead, then we pick up some extra area, and that extra area is a fixed amount. That adds a constant to i(x). Different choices of a just add constants to i(x).
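
For instance, sticking with f(s) = 2s: starting at a = 0 gives i(x) = x^2, while starting at a = 1 gives the integral from 1 to x of 2s ds, which is x^2 - 1. The two indefinite integrals differ by the constant 1.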

In fact, in the notation that people often use, they write the integral of f(x) dx, with no limits at all, and call that an indefinite integral. But usually what they mean is that they're looking for an antiderivative. It's sloppy terminology: an indefinite integral really means the integral from a to x of f(s) ds. Still, when you see that notation in books, as often as not people are just talking about antiderivatives, and that's fine, because we've just shown that indefinite integrals are antiderivatives, and antiderivatives are indefinite integrals.
