Taylor Series
- Big O Notation
- Connection with FTC
- Methods for Finding Taylor Polynomials
- Taylor Series
- Taylor's Remainder Theorem
We can approximate non-polynomials using polynomials, and polynomials are much easier to work with.
Maclaurin's Approach
Example: Approximating cosine
How would we find a quadratic polynomial in the form $p(x) = c_0 + c_1x + c_2x^2$ that approximates $y = \cos(x)$ near $x = 0$?
left=-6; right=6;
top=2; bottom=-2;
---
y = \cos(x)
y = 1 - \frac{1}{2}x^2
We need to find the constants $c_0$, $c_1$, and $c_2$.
First, the polynomial should match the function at $x = 0$: since $p(0) = c_0$ and $\cos(0) = 1$, we need $c_0 = 1$.
Now, we want the polynomial to have the same tangent slope as $\cos(x)$ at $x = 0$:
left=-6; right=6;
top=2; bottom=-2;
---
y = \cos(x)
y = 1 - \frac{1}{2}x^2
y = 1 \{-\pi < x < \pi\}
Differentiating our equations:
$$p'(x) = c_1 + 2c_2x, \qquad \frac{d}{dx}\cos(x) = -\sin(x)$$
Since $-\sin(0) = 0$, our derivative must be flat ($0$) at $x = 0$, and since $p'(0) = c_1$, we need $c_1 = 0$.
This pattern continues, and we take the second derivative as well for the $x^2$ coefficient:
$$p''(x) = 2c_2, \qquad \frac{d^2}{dx^2}\cos(x) = -\cos(x)$$
and now we have to match the second derivative of the polynomial as well: $-\cos(0) = -1$, and since $p''(0) = 2c_2$, we need $c_2 = -\frac{1}{2}$.
As you might imagine, as the degree of the polynomial increases, we have to match higher-order derivatives, and we get a more accurate approximation over a larger domain.
What if we try a cubic? Then we have:
$$p'''(x) = 6c_3, \qquad \frac{d^3}{dx^3}\cos(x) = \sin(x)$$
so $6c_3 = \sin(0) = 0$, meaning $c_3 = 0$ and the cubic term adds nothing.
As for a quartic function,
$$p^{(4)}(x) = 24c_4, \qquad \frac{d^4}{dx^4}\cos(x) = \cos(x)$$
and $\cos(0) = 1$, so we have $c_4 = \frac{1}{24}$, giving
$$p(x) = 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4$$
which is an even better approximation:
left=-6; right=6;
top=2; bottom=-2;
---
y = \cos(x)
y = 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4
Notice a few things:
- Successive uses of the power rule lead to factorials, so the coefficient of $x^n$ would be $\frac{f^{(n)}(0)}{n!}$
- Adding a new term doesn't mess up what the previous terms should be
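As a quick numerical sanity check (my own sketch, not from the notes), we can compare the quartic polynomial derived above against the real cosine:

```python
import math

# Quartic Maclaurin polynomial for cos(x); coefficients are f^(n)(0)/n!,
# so the x^2 term gets 1/2! = 1/2 and the x^4 term gets 1/4! = 1/24.
def cos_quartic(x):
    return 1 - x**2 / 2 + x**4 / 24

# The error grows as we move away from x = 0, matching the graphs above.
for x in [0.1, 0.5, 1.0]:
    print(x, abs(math.cos(x) - cos_quartic(x)))
```

The error at $x = 0.1$ is on the order of the first omitted term, $x^6/6!$, which is why the approximation looks indistinguishable from cosine near the origin.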
If we're going to approximate near $x = 0$, we should match as many derivatives of the function at $x = 0$ as we can.
Recall using the linear approximation to approximate functions. We ask if we can do better; can we use a tangent polynomial of some higher degree to get an even better approximation? In fact, the tangent approximation is just a Taylor Polynomial with $n = 1$.
Maclaurin Polynomial or Taylor Polynomial at x = 0
To approximate a function $f(x)$ near $x = 0$, use the degree-$n$ polynomial
$$p_n(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \dots + \frac{f^{(n)}(0)}{n!}x^n$$
(see differentiation, factorial)
Or:
$$p_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!}x^k$$
The constant term ensures the value of the polynomial matches the value of $f$ at $x = 0$; each later term matches one more derivative there.
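The formula is mechanical enough to sketch in a few lines of Python (my own illustration — the function name `maclaurin` is made up here): given the list of derivative values at $0$, evaluate the sum.

```python
import math

def maclaurin(derivs_at_0, x):
    """Evaluate sum of f^(k)(0)/k! * x^k given a list of derivative values at 0."""
    return sum(d / math.factorial(k) * x**k for k, d in enumerate(derivs_at_0))

# Every derivative of e^x at 0 is 1, so ten terms of all-ones
# already approximate e = e^1 very well.
approx_e = maclaurin([1] * 10, 1.0)
```

With ten terms the error at $x = 1$ is about $1/10!$, the size of the first omitted term.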
Taylor Polynomial
To approximate a function $f(x)$ near $x = a$:
$$p_n(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x - a)^n$$
Or:
$$p_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x - a)^k$$
"Maclaurin Polynomial" or "Maclaurin Series" is just a Taylor Polynomial/Series "centred" at $x = 0$.
So a Maclaurin series is a special case of a Taylor series, and a Taylor series is a generalization of a Maclaurin series. You may see "Maclaurin series" used quite often from other people, but I will just refer to them as "Taylor polynomials at $x = 0$".
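To see the centred version in action, here is a small sketch of mine (the helper name `taylor` is my own): expanding $\ln(x)$ at $a = 1$, where $f(1) = 0$ and $f^{(k)}(1) = (-1)^{k+1}(k-1)!$ for $k \geq 1$.

```python
import math

def taylor(derivs_at_a, a, x):
    """Evaluate sum of f^(k)(a)/k! * (x - a)^k from a list of derivative values at a."""
    return sum(d / math.factorial(k) * (x - a)**k for k, d in enumerate(derivs_at_a))

# Derivatives of ln(x) at a = 1: f(1) = 0, then (-1)^(k+1) * (k-1)!
derivs = [0.0] + [(-1) ** (k + 1) * math.factorial(k - 1) for k in range(1, 30)]

# 30 terms centred at 1 reproduce ln(1.5) to high accuracy.
approx = taylor(derivs, 1.0, 1.5)
```

Note that the $(k-1)!$ in the derivatives mostly cancels against the $k!$ in the formula, leaving coefficients $\pm\frac{1}{k}$ — the same pattern that shows up for $\ln(1+x)$ later in these notes.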
Taylor Series and Maclaurin Series
What if we kept adding terms infinitely? Then we have an infinite series, and we would get an ever more accurate approximation. That is, a Taylor/Maclaurin series is simply a Taylor/Maclaurin polynomial with $n \to \infty$.
Convergence and Divergence
Often, as we add more and more terms at a point, the partial sums approach some value; we say the sequence converges to that value. Then you would say that the infinite sum is equal to the value that it's converging to.
However, sometimes we can choose a point, and no matter how many derivatives we use, we will never converge to that value. If this occurs, we say the series diverges.
When a Taylor series is convergent, then the Taylor series is equal to the function it is derived from.
More details at Convergence and Divergence.
proof
None, it is impossible. In fact, this isn't necessarily true. Consider this piecewise function from a famous example:
$$f(x) = \begin{cases} e^{-1/x^2} & x \neq 0 \\ 0 & x = 0 \end{cases}$$
Using the definition of the derivative, we see that $f^{(n)}(0) = 0$ for all $n$. So the Taylor series at $x = 0$ is identically $0$: it converges everywhere, but it equals $f$ only at $x = 0$.
Well, that's disturbing.
- Prof. David Harmsworth
However, in practice, we are very unlikely to encounter this.
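The flat function above is easy to poke at numerically (a sketch of mine, not from the notes): the function is clearly nonzero away from the origin, while its Taylor series at $0$ predicts zero everywhere.

```python
import math

# The famous "flat" function: every derivative at 0 equals 0,
# so its Taylor series at 0 is identically zero.
def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Nonzero for x != 0, even though the Taylor series says 0.
value = f(0.5)   # e^(-4), roughly 0.0183
```

So the series converges perfectly well — it just converges to the wrong function everywhere except $x = 0$.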
Uniqueness
Taylor Series are unique for each function and each centre $x = a$.
Furthermore, if a polynomial matches the values of $f$ and its first $n$ derivatives at $x = a$, then it must be the degree-$n$ Taylor polynomial of $f$ at $x = a$.
Examples
Find the Taylor Polynomial of $f(x) = e^x$ at $x = 0$.
solution
Every derivative of $e^x$ is $e^x$, so $f^{(n)}(0) = 1$ for all $n$:
$$e^x \approx 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$$
Strange property: if we infinitely add, we converge to a number.
top=30; bottom=-2;
left=-15; right=10;
---
y = e^x
y = 1 + x + x^2/2! + x^3/3! + x^4/4!
Find the Taylor Polynomial of $f(x) = \sin(x)$ at $x = 0$.
solution
Notice:
$$f(x) = \sin x,\quad f'(x) = \cos x,\quad f''(x) = -\sin x,\quad f'''(x) = -\cos x,\ \dots$$
so the derivatives at $0$ cycle through $0, 1, 0, -1, \dots$ Which gives:
$$\sin(x) \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}$$
top=2; bottom=-2;
left=-10; right = 10;
---
y = \sin(x)
y = x - x^3/3! + x^5/5! - x^7/7!
Find the Taylor Polynomial of $f(x) = \cos(x)$ at $x = 0$.
solution
Listing out the derivatives again:
$$f(x) = \cos x,\quad f'(x) = -\sin x,\quad f''(x) = -\cos x,\quad f'''(x) = \sin x,\ \dots$$
so the derivatives at $0$ cycle through $1, 0, -1, 0, \dots$ Which gives:
$$\cos(x) \approx 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!}$$
top=2; bottom=-2;
left=-10; right = 10;
---
y = \cos(x)
y = 1 - x^2/2! + x^4/4! - x^6/6!
Euler and complex numbers: the powers of $i$ cycle: $i^0 = 1,\ i^1 = i,\ i^2 = -1,\ i^3 = -i,\ i^4 = 1,\ \dots$
Euler noticed that this follows the same pattern as the derivatives of sine/cosine (i.e as you take more derivatives, it "cycles" through).
Substituting $x = i\theta$ into the series for $e^x$ and grouping the real and imaginary parts:
$$e^{i\theta} = \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \dots\right) + i\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \dots\right) = \cos\theta + i\sin\theta$$
which is Euler's Formula!
Note that we got the values for $\cos\theta$ and $\sin\theta$ by recognizing their Taylor series from the examples above.
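This grouping is easy to verify numerically (my own check): summing the exponential series at $x = i\theta$ should land exactly on $\cos\theta + i\sin\theta$.

```python
import cmath
import math

theta = 1.2

# Partial sum of the exponential series at x = i*theta. Powers of i cycle
# through 1, i, -1, -i, sorting the terms into the cosine series (real
# part) and the sine series (imaginary part).
s = sum((1j * theta) ** n / math.factorial(n) for n in range(30))

# s should agree with e^(i*theta) = cos(theta) + i*sin(theta)
```

Thirty terms are far more than enough here; the omitted tail is astronomically small for $|\theta|$ this size.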
Find the Taylor Polynomial of $f(x) = \ln(1 + x)$ at $x = 0$, where $x > -1$.
solution
In the Taylor polynomial:
$$f'(x) = \frac{1}{1+x},\quad f''(x) = \frac{-1}{(1+x)^2},\quad f'''(x) = \frac{2}{(1+x)^3},\quad \dots \quad f^{(n)}(0) = (-1)^{n+1}(n-1)!$$
Using this with the coefficient formula, the $(n-1)!$ cancels against the $n!$:
$$\frac{f^{(n)}(0)}{n!} = \frac{(-1)^{n+1}}{n}, \qquad \ln(1 + x) \approx x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \dots$$
left=-5; right=10;
bottom=-5; top=5;
---
y = \ln(1 + x)
y = x - x^2/2 + x^3/3 - x^4/4 + x^5/5 | #388c46
y = x - x^2/2 + x^3/3 - x^4/4 + x^5/5 - x^6/6 + x^7/7 - x^8/8 + x^9/9 - x^{10}/{10} | #6042a6
y = x - x^2/2 + x^3/3 - x^4/4 + x^5/5 - x^6/6 + x^7/7 - x^8/8 + x^9/9 - x^{10}/{10} + x^{11}/{11} - x^{12}/{12} + x^{13}/{13} - x^{14}/{14} + x^{15}/{15} - x^{16}/{16} + x^{17}/{17} - x^{18}/{18} + x^{19}/{19} - x^{20}/{20} | #fa7e19
(4, -2) | label: x - x^2/2 + ... + x^5/5 | #388c46 | cross
(4, -3) | label: x - x^2/2 + ... - x^{10}/{10} | #6042a6 | cross
(4, -4) | label: x - x^2/2 + ... - x^{20}/{20} | #fa7e19 | cross
At $x = 4$, this Taylor series diverges, and does not approach $\ln(1 + 4)$ no matter how many terms we add.
Historically, this is what Newton did before Taylor, using the function:
$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \dots$$
which he integrated term by term to get the series for $\ln(1 + x)$.
That is, we can add up an infinite number of numbers, and we will end with a number.
Which is Bananas
- Prof. Scott
Note this only works for $-1 < x \leq 1$.
E.g. at $x = 1$:
$$\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots$$
Which comes from Infinite Series#Geometric series
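The convergence boundary shows up immediately if we just compute partial sums (a sketch of mine; `log_series` is a made-up helper name): inside the interval the sums settle on $\ln(1+x)$, while at $x = 4$ they explode.

```python
import math

def log_series(x, n):
    """Partial sum x - x^2/2 + x^3/3 - ... (n terms) of the series for ln(1 + x)."""
    return sum((-1) ** (k + 1) * x**k / k for k in range(1, n + 1))

# Inside the interval of convergence: the partial sums approach ln(1.5).
inside = log_series(0.5, 50)

# Outside (x = 4): the terms 4^k/k grow without bound and the sums blow up.
outside = log_series(4.0, 40)
```

This matches the graph above: every partial sum eventually peels away from $\ln(1+x)$ once $x > 1$, no matter how many terms we keep.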
Important Taylor Series to Remember
All of these have been derived in the #Examples section
Geometric Series
$$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + \dots$$
for $|x| < 1$
Natural Number e
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$$
for all $x$
Natural Log (1 - x)
$$\ln(1 - x) = -\sum_{n=1}^{\infty} \frac{x^n}{n} = -x - \frac{x^2}{2} - \frac{x^3}{3} - \dots$$
Integral of geometric series
for $-1 \leq x < 1$
Cosine
$$\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots$$
for all $x$
Sine
$$\sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots$$
for all $x$
Cosine with shift
$$\cos(x - a) = \sum_{n=0}^{\infty} \frac{(-1)^n (x - a)^{2n}}{(2n)!}$$
for all $x$
Binomial Theorem
$$(1 + x)^r = \sum_{n=0}^{\infty} \binom{r}{n} x^n = 1 + rx + \frac{r(r-1)}{2!}x^2 + \dots$$
for $|x| < 1$
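The binomial series is the only entry in the table where the coefficient isn't an ordinary factorial ratio, so it's worth a quick check (my own sketch; `gen_binom` is a made-up helper, since `math.comb` only accepts integers):

```python
import math

def gen_binom(r, n):
    """Generalized binomial coefficient r(r-1)...(r-n+1) / n! for real r."""
    num = 1.0
    for k in range(n):
        num *= (r - k)
    return num / math.factorial(n)

def binom_series(r, x, n_terms):
    return sum(gen_binom(r, n) * x**n for n in range(n_terms))

# With r = 1/2 this approximates sqrt(1 + x) for |x| < 1.
approx = binom_series(0.5, 0.2, 30)
```

For integer $r$ the coefficients vanish past $n = r$ and the series collapses to the finite binomial theorem, which is why the same formula covers both cases.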
Two-Variable Taylor Series
Consider a function $f(x, y)$ of two variables that we want to approximate near a point $(a, b)$.
For two variables, we will instead freeze one variable (say $y$) and take a single-variable Taylor series in $x$ about $x = a$:
$$f(x, y) = \sum_{n=0}^{\infty} \frac{1}{n!} \frac{\partial^n f}{\partial x^n}(a, y)\,(x - a)^n \tag{1}$$
Now, the terms $\frac{\partial^n f}{\partial x^n}(a, y)$ are still functions of $y$, so we can expand each of them as a Taylor series in $y$ about $y = b$.
If we substitute each of these into (1), we get an infinite series of an infinite series. But we can re-organize these terms so it's just one series:
The first two lines are actually the tangent plane approximation formula.
After some factoring and adding a few more terms, we arrive at the formula for a two-variable Taylor polynomial:
To approximate a function $f(x, y)$ near $(a, b)$:
$$f(x, y) \approx f + f_x\,(x - a) + f_y\,(y - b) + \frac{1}{2!}\left[f_{xx}(x - a)^2 + 2f_{xy}(x - a)(y - b) + f_{yy}(y - b)^2\right] + \dots$$
where all of the partial derivatives are evaluated at $(a, b)$.
These terms all show a pattern: the structure of a single-variable Taylor series is retained, but all partial derivatives must appear.
Moreover, the coefficients come from Pascal's triangle.
Why? Because the degree-$n$ part of the series is
$$\frac{1}{n!} \sum_{k=0}^{n} \binom{n}{k} \frac{\partial^n f}{\partial x^{n-k}\,\partial y^k}(a, b)\,(x - a)^{n-k}(y - b)^k$$
and the binomial coefficients $\binom{n}{k}$ are exactly the entries of Pascal's triangle: mixed partials like $f_{xy}$ and $f_{yx}$ are equal, so their terms get collected together.
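To make the two-variable formula concrete, here is a check of mine (the example function and names are my own choices): for $f(x, y) = \sin(x)\,e^y$ at $(0, 0)$, the nonzero partials through second order are $f_x = 1$ and $f_{xy} = 1$, so the quadratic Taylor polynomial is just $x + xy$.

```python
import math

def f(x, y):
    return math.sin(x) * math.exp(y)

# Second-order Taylor polynomial of f at (0, 0). The partials there are
# f = 0, f_x = 1, f_y = 0, f_xx = 0, f_xy = 1, f_yy = 0, so the formula
# reduces to x (linear part) + (1/2!) * 2 * f_xy * x * y = x*y.
def p2(x, y):
    return x + x * y

# The error near the origin shrinks like the omitted cubic terms.
err = abs(f(0.1, 0.1) - p2(0.1, 0.1))
```

Note the factor of $2$ on the mixed term cancels the $\frac{1}{2!}$, exactly the Pascal's-triangle bookkeeping described above.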