Taylor polynomial remainder (part 2) | Series | AP Calculus BC | Khan Academy

In the last video, we started to explore the notion of an error function. Not to be confused with the expected value, which unfortunately uses the same notation; here E is for error. It's also sometimes referred to as the remainder function. We saw that it's really just the difference between the function and our polynomial approximation of it. So, for example, this distance right over here is our error at x equal to b. And what we really care about is its absolute value, because at some points f(x) might be larger than the polynomial and at other points the polynomial might be larger than f(x); what we care about is the absolute distance between them.

What I want to do in this video is try to bound our error at some b, that is, show it's less than or equal to some constant value, where we're just going to assume that b is greater than a. We got a tantalizing result in the last video that suggests we might be able to do this: the (n+1)th derivative of our error function is equal to the (n+1)th derivative of our function, and so their absolute values are equal as well. So if we could somehow bound the (n+1)th derivative of our function over some interval that matters to us, an interval that has b in it, then we can at least bound the (n+1)th derivative of our error function. And then maybe we can do a little bit of integration to bound the error itself at some value b. So let's see if we can do that.
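To keep the pieces straight, here is that setup written out in symbols. I'm writing P_n for the nth-degree Taylor polynomial of f centered at a (the video just calls it "the polynomial" or "our approximation"); since P_n has degree n, its (n+1)th derivative is zero, which is where the identity below comes from:

    E(x) = f(x) - P_n(x), \qquad
    E^{(n+1)}(x) = f^{(n+1)}(x),

and the goal is to bound \left| E(b) \right| for some b > a.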
Well, let's just assume we're in a situation where we do know something about the (n+1)th derivative of f(x). Let me draw it in white, a color I haven't used yet: let's say that thing over there is the (n+1)th derivative of f, and I only care about it over this interval right over here. Who cares what it does later; I just have to bound it over the interval, because at the end of the day I just want to bound the error at b, right over there.

So let's say we know that the absolute value of the (n+1)th derivative of f(x) is bounded, that it's less than or equal to some M over the interval. (And I apologize: I switched between the capital N and the lowercase n in the last video. I shouldn't have, but now that you know I did, hopefully it won't confuse things.) We only care about the interval; the derivative might not be bounded in general, but all that matters is that it takes some maximum value over this interval. So over the interval — I could write it this way: for x in the closed interval [a, b], which includes both endpoints, so x could be a, x could be b, or anything in between — this derivative will have some maximum absolute value, M for max. We know it will have a maximum value as long as it is continuous, so once again we're going to assume that it is continuous over this interval right over here.

Well, this thing right over here is the same thing as the (n+1)th derivative of the error function. So that implies that the absolute value of the (n+1)th derivative of the error function — because they are the same thing — is also bounded by M.
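In symbols, the assumption and its immediate consequence are:

    \left| f^{(n+1)}(x) \right| \le M \ \text{ for } x \in [a, b]
    \quad \Longrightarrow \quad
    \left| E^{(n+1)}(x) \right| \le M \ \text{ for } x \in [a, b].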
So that's a little bit of an interesting result, but it gets us nowhere near where we want to be. It might look similar, but this is the (n+1)th derivative of the error function. And we'll have to think about how to find an M in the future; for now we're assuming we somehow know it, and maybe we'll do some example problems where we figure it out. But this is the (n+1)th derivative: we've bounded its absolute value, while what we really want to bound is the actual error function — the 0th derivative, you could say, the function itself. So what we can try is to integrate both sides and see if we can eventually get down to E(x), our error function or remainder function. So let's do that: let's take the integral of both sides.

Now the integral on the left-hand side is a little interesting: we're taking the integral of an absolute value. It would be easier if we were taking the absolute value of the integral, and lucky for us, that's the way it's set up. Let me write a little aside here, because it's something worth thinking about. I have two options — the integral of the absolute value of f(x), versus the absolute value of the integral of f(x) — and written like this they look the same, so which of them can be larger?

Well, you just have to think through the scenarios. If f(x) is always positive over the interval you're integrating, they're the same thing: you're getting positive values either way, and taking the absolute value of a positive value makes no difference. If f(x) is negative the entire time — say this is our x-axis and that is our y-axis — then the integral evaluates to a negative value, but then you take its absolute value; and over here, the integral of |f(x)| evaluates directly to a positive value, so the two are still the same. The interesting case is when f(x) is both positive and negative. If f(x) looks something like that, then in the integral this part would be positive and this part would be negative, so they would cancel each other out, and you'd get a smaller value than if you took the integral of the absolute value. The absolute value of f would look something like this, and all of its areas — viewing this as a definite integral — are positive. So you get a bigger value when you take the integral of the absolute value, especially when f(x) goes both positive and negative over the interval, than when you take the integral first and then the absolute value: if you take the integral first, this stuff cancels out with this stuff right over here, and you end up taking the absolute value of a lower-magnitude number. So in general, the absolute value of the integral is less than or equal to the integral of the absolute value.
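If you want to convince yourself of that inequality numerically, here is a quick sanity check. The specific function and interval — cos x on [0, pi], where the positive and negative lobes cancel exactly — are my own choices for illustration, not something from the video:

    # Numerical check that |integral of f| <= integral of |f| for a sign-changing f.
    import math

    def midpoint_integral(g, a, b, steps=100_000):
        """Approximate the definite integral of g over [a, b] with a midpoint Riemann sum."""
        dx = (b - a) / steps
        return sum(g(a + (i + 0.5) * dx) for i in range(steps)) * dx

    f = math.cos                   # positive on [0, pi/2], negative on [pi/2, pi]
    a, b = 0.0, math.pi

    abs_of_integral = abs(midpoint_integral(f, a, b))                # ~0: the lobes cancel
    integral_of_abs = midpoint_integral(lambda x: abs(f(x)), a, b)   # ~2: no cancellation

    print(abs_of_integral, integral_of_abs)
    assert abs_of_integral <= integral_of_abs + 1e-9

Here the signed areas cancel, so the left-hand side is essentially 0 while the right-hand side is about 2.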
So, coming back to our inequality: this right here is the integral of the absolute value, which is going to be greater than or equal to — and I think you'll see why I'm doing this in a second — the absolute value of the integral of the (n+1)th derivative of the error function, dx. The reason this is useful is that we still keep the inequality — this is still less than or equal to that — but now the inside is a pretty straightforward integral to evaluate: the antiderivative of the (n+1)th derivative is just the nth derivative. So this business right over here is just the absolute value of the nth derivative of our error function. (Did I say expected value? I shouldn't have — see, it even confuses me. This is the error function; I probably should have used R, for remainder. There is nothing about probability or expected value in this video. E is for error.) So anyway, this is going to be the absolute value of the nth derivative of our error function, which is less than or equal to this, which is less than or equal to the antiderivative of M. Well, M is a constant, so that antiderivative is Mx — and since we're taking indefinite integrals, we can't forget that we also have a constant C over here.

In general, when you're trying to create an upper bound, you want as low an upper bound as possible, so we want to minimize what this constant is. And lucky for us, we do know the value this function takes on at a point: we know that the nth derivative of our error function at a is equal to 0 — I think we wrote that over here — and that's because the nth derivatives of the function and of the approximation are exactly the same at a. So let's evaluate both sides at a; I'll do that over here on the side. The absolute value of the nth derivative of the error function at a is the absolute value of 0, which is 0, and that needs to be less than or equal to this thing evaluated at a, which is Ma plus C. If you look at that part of the inequality and subtract Ma from both sides, you get -Ma less than or equal to C. So our constant, based on that little condition we got from the last video, has to be greater than or equal to -Ma. If we want to minimize the constant — to get as low a bound as possible — we pick C equal to -Ma, the lowest possible C that meets the constraints we know to be true.

So we pick C = -Ma, and then we can rewrite the whole thing: the absolute value of the nth derivative of the error function — not the expected value; I have a strange suspicion I might have said expected value, but this is the error function — is less than or equal to M times (x - a). And once again, all of the constraints hold: this is for x in the closed interval between a and b. It looks like we're making progress: we at least went from the (n+1)th derivative down to the nth derivative.
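Written the way the video does it, with indefinite integrals and a single constant of integration C collected on the right, this step is:

    \left| E^{(n)}(x) \right| = \left| \int E^{(n+1)}(x)\,dx \right|
    \le \int \left| E^{(n+1)}(x) \right| dx
    \le \int M\,dx = Mx + C,

    0 = \left| E^{(n)}(a) \right| \le Ma + C
    \;\Rightarrow\; C \ge -Ma
    \;\Rightarrow\; \text{take } C = -Ma
    \;\Rightarrow\; \left| E^{(n)}(x) \right| \le M(x - a), \quad x \in [a, b].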
Let's see if we can keep going, with the same general idea. If we know this, then we can take the integral — the antiderivative — of both sides. And we know, from what we figured out up here, that something even smaller than this right over here is the absolute value of the integral of the nth derivative of the error function — not the expected value [LAUGH], see, I said it — the nth derivative of the error function of x, dx. So we know this is less than or equal to that, by the exact same logic as before. And this is useful because it is just the absolute value of the (n-1)th derivative of our error function of x, which is now less than or equal to this, which is less than or equal to this, which is less than or equal to this right over here. The antiderivative of this right over here is going to be M times (x - a) squared over 2. You could do u-substitution if you want, or you could just say: look, I have a little expression here whose derivative is 1, so it's implicitly there and I can treat it like a u — raise it to an exponent and divide by that exponent. But once again I'm taking indefinite integrals, so I'm going to add a plus C over here.

Let's use that same exact logic and evaluate both sides at a. The left side, evaluated at a, we know is going to be zero; we figured that out up here, from the last video. So, doing it on the right over here: you get zero when you evaluate the left side at a, and when you evaluate the right side at a you get M times (a - a) squared over 2, which is 0, plus C. So you get 0 less than or equal to C. Once again we want to minimize our constant — minimize our upper bound — so we want to pick the lowest possible C that meets our constraints, and the lowest possible C that meets our constraint is zero.
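The same move in symbols for this second round. (M and a are constants, so the antiderivative of M(x - a) with respect to x really is M(x - a)^2/2 up to a constant; if you prefer to expand first, the extra constant term just gets absorbed into C.)

    \left| E^{(n-1)}(x) \right| \le \int M(x - a)\,dx = \frac{M(x - a)^2}{2} + C,
    \qquad
    0 = \left| E^{(n-1)}(a) \right| \le \frac{M(a - a)^2}{2} + C \;\Rightarrow\; C \ge 0,

    \text{so with } C = 0: \quad
    \left| E^{(n-1)}(x) \right| \le \frac{M(x - a)^2}{2}, \quad x \in [a, b].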
And so the general idea is that we can keep doing this — keep integrating in exactly the same way, using this exact same property — all the way until we get a bound on the error function of x itself. You could view that as the 0th derivative: we're going all the way down to the 0th derivative, which is really just the error function. And you can already see the pattern. The bound on the error function of x is going to be M times (x - a) raised to an exponent, and one way to think about it is that this exponent plus the order of the derivative is equal to n + 1; here the derivative order is zero, so the exponent is going to be n + 1. And whatever the exponent is, you're going to have (n + 1) factorial over here in the denominator. If you ask, wait, where does this (n + 1) factorial come from — I just had a 2 here — well, think about what happens when we integrate this again: you raise this to the third power and divide by 3, so your denominator has 2 times 3. When you integrate again, you raise it to the fourth power and divide by 4, so your denominator is 2 times 3 times 4, which is 4 factorial. So whatever power you're raising (x - a) to, the denominator is going to be that power's factorial.
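Stacking the steps, the ladder of bounds runs all the way down to the error function itself:

    \left| E^{(n)}(x) \right| \le M(x - a), \quad
    \left| E^{(n-1)}(x) \right| \le \frac{M(x - a)^2}{2!}, \quad \dots, \quad
    \left| E(x) \right| \le \frac{M(x - a)^{n+1}}{(n+1)!}, \qquad x \in [a, b].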
But what's really interesting now is that if we are able to figure out that maximum value M of our function's (n+1)th derivative right there, we now have a way of bounding our error function over that interval between a and b. So, for example, we can now bound the error function at b if we know what M is: the error function at b is going to be less than or equal to M times (b - a) to the (n + 1)th power, over (n + 1) factorial. So that gets us a really powerful result — the math behind it, you could call it — and now we can show some examples where this can actually be applied.
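As a quick numerical illustration of the bound (my own example, not one worked in the video): take f(x) = e^x centered at a = 0, with b = 1 and n = 3. Every derivative of e^x is e^x, which is increasing, so M = e^b works as the maximum of |f^(n+1)| on [a, b]. A short script then checks that the actual error stays under M(b - a)^(n+1)/(n+1)!:

    # Sketch: check |E(b)| <= M (b - a)^(n+1) / (n+1)! for f(x) = e^x about a = 0.
    import math

    def taylor_poly_exp(x, n, a=0.0):
        """Degree-n Taylor polynomial of e^x centered at a, evaluated at x."""
        return sum(math.exp(a) * (x - a) ** k / math.factorial(k) for k in range(n + 1))

    a, b, n = 0.0, 1.0, 3
    M = math.exp(b)   # e^x is increasing, so its max on [a, b] is e^b

    actual_error = abs(math.exp(b) - taylor_poly_exp(b, n, a))
    error_bound = M * (b - a) ** (n + 1) / math.factorial(n + 1)

    print(f"actual |E(b)| = {actual_error:.6f}, bound = {error_bound:.6f}")
    assert actual_error <= error_bound

Running it gives an actual error of about 0.0516 against a bound of about 0.1133, so the bound holds and, as is typical, is not tight.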

39 thoughts on “Taylor polynomial remainder (part 2) | Series | AP Calculus BC | Khan Academy”

  • September 15, 2011 at 5:25 pm
    Beautiful!

  • September 17, 2011 at 7:46 pm
    Thank you very much, keep it up Khan Academy. I love the Pure Maths videos and the financial ones. In particular I never use anything else in the world to research economics or finance, and I'm now the most enlightened person in my circle of friends on the subject :). Don't give it up.

  • February 21, 2012 at 10:29 am
    You have no idea how much I appreciate this video… thank you so much!

  • May 25, 2012 at 6:32 pm
    Thanks so much, thank you!

  • June 19, 2012 at 5:04 am
    Oh, so we are not only finding the error for a Taylor polynomial modeling a function, like sin x. We are also finding the errors for the derivatives (or slopes of the tangent lines) a Taylor polynomial models for a function. Ah, I see why this error bounding is so important to the function as a whole.

  • October 16, 2012 at 11:16 am
    Genius…

  • November 1, 2012 at 9:53 am
    Does anyone know where the video where he does examples of the remainder function is?

  • December 12, 2012 at 5:52 pm
    Wonderful video, gives a lot of intuition on the matter, but how do you bound the C at 10:10? I got a bit confused over this part, because you have no control over C at the point where you choose it to be -Ma; how do you prove that it is? Really nice video, but that's a kind of crucial flaw in the proof :/

  • April 22, 2013 at 4:11 pm
    Thank you!

  • May 13, 2014 at 9:35 pm
    Your explanation, accompanied with the drawings, about the two different integrals is stunningly clear! Good job AGAIN!! =)

  • October 22, 2014 at 12:32 pm
    You didn't explain why you replace M with f^(n+1)(x) at the very end.
  • October 27, 2014 at 7:43 pm
    I, uh, found my new favourite source to check all this stuff 😀 Excellent presentation.

  • February 6, 2015 at 7:22 pm
    Even you were confused between E(x) and the expected value from probability. I think you should stay with R[n](x), as it is less confusing, especially for students taking Introduction to Statistics (straight after Introduction to Probability) at the same time as Calculus 2, where every letter of every alphabet has a special meaning. Never mind, thanks for the video.

  • April 24, 2015 at 11:39 pm
    For me, it's better that you use M instead of max abs(f^(n+1)(x)) on [a,b] like in the book I have, because it clearly shows that it is a constant and not a function, which got me very confused when I first learned this stuff.

  • July 30, 2015 at 2:55 am
    Mind blowing.

  • February 1, 2016 at 10:47 pm
    I've only watched the two videos on this concept and it's already so much better than others on YouTube.

  • February 1, 2016 at 10:50 pm
    Brilliant

  • June 8, 2016 at 11:14 pm
    I just have an issue with interpreting what happened in the second integration of your error term towards the end of the video; do you mind clarifying my interpretation of what happened, Khan? If you are treating 'a' and 'M' as constants, then the integral of 'M(x-a)' with respect to x, where x is the variable, won't be 'M/2 * (x-a)^2'. This is evident when you expand the expression before integrating: 'Mx – Ma'. '-Ma' is a constant, so integrated it becomes '-aMx' (I just rearranged the order of the constants, because '-Max' might be confusing). Then 'Mx' integrated becomes 'M/2 * x^2'. So when you simplify the expression now: 'Mx(x-2a)/2'.

    I do however understand that it should be intuitive that the result of the error term would follow the general trend of the following polynomial term; but I am stuck on interpreting that second integral. Thank you in advance, Khan.
  • June 20, 2016 at 11:56 am
    This is life changing!

  • July 18, 2016 at 4:33 am
    You are really doing a great job!

  • August 8, 2016 at 6:45 am
    11:40 How?

  • September 5, 2016 at 10:17 pm
    Where are the examples? I can't find them.

  • October 9, 2016 at 12:39 am
    Why is the integral of E^(n+1) = E^n?

  • February 5, 2017 at 1:18 am
    This is so hard, man.

  • June 15, 2017 at 4:53 am
    It's not the expected value function. Oh, did I mention it's not the expected value function? Oh, by the way, it's not the expected value function.

  • July 30, 2017 at 9:03 pm
    There's something I don't understand. From 6:22 to 7:33, he claims that the absolute value of the integral of some function is LESS THAN OR EQUAL TO the integral of that function in absolute value. How is this possible? We are trying to find the area between a curve and the x-axis. If you are taking the absolute value of the integral, you will ALWAYS get a positive value, because the negative would be negated. If you are taking the absolute value first, the integral can end up negative, and then we would have negative area. Take for example f(x) = 4x^(3). The integral of 4x^(3) is x^4, and bounded between x = -3 and x = -1, the area bounded between there and the x-axis is -80… the absolute value is then taken, to give you 80. Meanwhile, if you took the absolute value of 4x^(3), you would get 4x^(3)… then the integral of that would give you a negative value. Therefore, it seems to me as if the absolute value on the outside is LARGER than inside.

  • August 8, 2017 at 3:14 pm
    Amazing. Understood it. There is however another way (which is straightforward) to derive this error bound that I found at the following link:
    https://brilliant.org/wiki/taylor-series-error-bounds/

    Is that really valid? There are statements there that don't seem to be correct, and I can't understand it. For example, "Since the Taylor approximation becomes more accurate as more terms are included, the Pn+1(x) polynomial must be more accurate than Pn(x)" does not seem to be right.
  • March 31, 2018 at 11:27 am
    So now we see that there was another assumption about the "arbitrary function f": its (n+1)th derivative must be continuous.

  • May 4, 2018 at 9:35 pm
    I want a graphical proof/demonstration.

  • August 1, 2018 at 2:33 pm
    The best and the perfect.

  • August 3, 2018 at 3:35 am
    Why isn't the (n+1)th derivative of f(x) 0?

  • February 17, 2019 at 6:59 pm
    Sir, is the order of the error function related to this? Could we say the order of the error function is (n+1)?

  • April 12, 2019 at 6:00 pm
    What if I have a function that doesn't satisfy the requirement abs(integral(f(x))) <= integral(abs(f(x)))? For example, y = x^(-2), for all x belonging to N.

  • August 28, 2019 at 3:28 am
    It might have been much easier and clearer if they used the notation Max instead of just M. M is already too well known as the middle value.

  • October 18, 2019 at 1:45 pm
    What happened to the c (constant of integration)???

  • November 6, 2019 at 1:51 am
    Remember when math used to be numbers?

  • November 17, 2019 at 5:16 am
    Thank you 😊

  • December 22, 2019 at 2:01 pm
    At 08:10, why isn't a constant needed when taking the indefinite integral of the (n+1)th derivative of the error function?
