Chapter 9 Infinite Series

Author

Jiaye Xu

Published

February 27, 2026

9.1 Infinite Sequences

An infinite sequence \(\{a_n\}\) is a function whose domain is the set of positive integers and whose range is a set of real numbers.

Convergence and Divergence

For sequences of variable sign, it is helpful to have the following result.

Monotonic Sequences

The expression monotonic sequence is used to describe either a nondecreasing or nonincreasing sequence.

N.B. The convergence or divergence of a sequence does not depend on the character of the initial terms, but rather on what is true for large \(n\).

9.2 Infinite Series

An infinite series is the sum of an infinite sequence of numbers

\[ a_1+a_2+\ldots+a_n+\ldots \]

The goal of this section is to understand the meaning of such an infinite sum and to develop methods to calculate it.

The sum of the first \(n\) terms, called the \(n\)th partial sum, \[ S_n=a_1+a_2+\ldots+a_n=\sum_{k=1}^n a_k \] is a finite sum calculated by ordinary addition.

For example, for the series

\[ \frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\ldots \]

the partial sums are \[S_1=\frac{1}{2},\quad S_2=\frac{3}{4},\quad S_3=\frac{7}{8},\quad \ldots,\quad S_n=1-\frac{1}{2^n}.\]

The infinite sum is then defined to be the limit of the partial sums \(S_n\); in this example,
\[\lim_{n\to \infty}S_n=\lim_{n\to\infty}\left(1-\frac{1}{2^n}\right)=1\]
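This limit can be checked numerically. The sketch below (an illustration, not part of the text) compares the computed partial sums of \(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\ldots\) with the closed form \(S_n = 1 - 1/2^n\):

```python
# Illustrative numerical check: the partial sums of 1/2 + 1/4 + 1/8 + ...
# match the closed form S_n = 1 - 1/2**n and approach the limit 1.
partial_sums = []
s = 0.0
for n in range(1, 21):
    s += 1.0 / 2**n                 # add the nth term, 1/2^n
    partial_sums.append(s)

closed_form = [1.0 - 1.0 / 2**n for n in range(1, 21)]
# partial_sums[-1] is already within 1e-5 of the limit 1
```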

Geometric Series

A series of the form \[a+ar+ar^2+ar^3+\ldots=\sum_{k=1}^\infty ar^{k-1},\] where \(a \neq 0\), is called a geometric series.

Comment: Part (b) shows that any repeating decimal represents a rational number.
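As a concrete illustration of this comment (the specific decimal is an assumed example, not from the text): the repeating decimal \(0.121212\ldots\) is the geometric series with \(a=12/100\) and \(r=1/100\), so its sum \(a/(1-r)\) is an exact rational number:

```python
from fractions import Fraction

# The repeating decimal 0.121212... is the geometric series
# 12/100 + 12/100**2 + 12/100**3 + ... with a = 12/100, r = 1/100.
a = Fraction(12, 100)
r = Fraction(1, 100)
total = a / (1 - r)   # exact sum a/(1 - r) = 12/99 = 4/33, a rational number
```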

Suppose for convenience that the large circle has radius \(1\); then the triangle has sides of length \(2\sqrt 3\).

Geometric reasoning: the center of the large circle is two-thirds of the way from the upper vertex to the base.

The vertical stack has area

The probability that Peter wins is the sum of the geometric series

A General Test for Divergence

N.B. The second statement is the contrapositive of the first statement. Recall that the contrapositive of a statement is true whenever the statement is true. However, the converse of a true statement could be false.

The Harmonic Series – a divergent series

N.B. The converse of the first statement in Theorem A is NOT necessarily true. For example, the harmonic series \[\sum_{n=1}^\infty \frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\ldots\]

Clearly, \(\lim_{n\to \infty}a_n=\lim_{n\to\infty}\frac{1}{n}=0\); however, the series diverges.
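This behavior can be seen numerically: the terms \(1/n\) shrink to zero, yet the partial sums keep growing (roughly like \(\ln n\), a standard fact stated here as background):

```python
# Numerical illustration: the harmonic series has terms 1/n -> 0,
# yet its partial sums grow without bound (roughly like ln n).
def harmonic_partial_sum(n):
    return sum(1.0 / k for k in range(1, n + 1))

s_100 = harmonic_partial_sum(100)       # about 5.19
s_10000 = harmonic_partial_sum(10000)   # about 9.79; still climbing
```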

Collapsing Series
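A classic collapsing (telescoping) example, sketched numerically (this concrete series is an illustration and not necessarily the one worked in the text): since \(\frac{1}{k(k+1)}=\frac{1}{k}-\frac{1}{k+1}\), the partial sum collapses to \(S_n=1-\frac{1}{n+1}\), so the series converges to \(1\).

```python
# Collapsing series check: 1/(k*(k+1)) = 1/k - 1/(k+1), so the
# partial sum telescopes to S_n = 1 - 1/(n+1) and the sum is 1.
def telescoping_partial_sum(n):
    return sum(1.0 / (k * (k + 1)) for k in range(1, n + 1))

s_1000 = telescoping_partial_sum(1000)   # equals 1 - 1/1001 up to rounding
```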

Properties of Convergent Series

Convergent series behave much like finite sums; what you expect to be true often is true.

Intuitive Interpretations:

  • For a finite sum, the associative law of addition allows us to group terms in any way that we wish. For example,

  • For an infinite sum whose sequence of partial sums diverges, grouping terms can lead us into a paradox. For example, the series

    It turns out that grouping of terms in a series is acceptable provided that the series is convergent; in such a case we can group terms in any way that we wish.

9.3 Positive Series: The Integral Test

In this and the next section, we restrict our attention to series with positive (or at least nonnegative) terms.

Bounded Partial Sums

Our first result flows directly from the Monotonic Sequence Theorem,

Series and Improper Integrals

The behavior of \(\sum_{k=1}^\infty f(k)\) and \(\int_1^\infty f(x)\,dx\) with respect to convergence is similar and gives a very powerful test:

Integral Test

N.B. The integer \(1\) may be replaced by any positive integer \(M\) throughout this theorem. In testing for convergence or divergence of a series, we can ignore the beginning terms or even change them.

N.B. The equivalent statement: The series \(\sum_{k=1}^\infty f(k)\) and the improper integral \(\int_1^\infty f(x)\,dx\) converge or diverge together.

Proof.

\(p\)-Series Test

Approximating the Sum of a Series

If we use the \(n\)th partial sum to approximate the sum of the series \(S\), we make an error \(E_n=S-S_n\). The question, then, is how to find an upper bound on this error.
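For a decreasing positive \(f\), the standard bound is \(0\le E_n\le\int_n^\infty f(x)\,dx\). The sketch below applies it to \(\sum 1/k^2\) (whose sum \(\pi^2/6\) is a known fact, used here only to measure the true error):

```python
import math

# Error bound from the Integral Test for f(x) = 1/x**2:
# the tail integral from n to infinity is 1/n, so E_n <= 1/n.
n = 100
s_n = sum(1.0 / k**2 for k in range(1, n + 1))  # 100th partial sum
true_sum = math.pi**2 / 6                        # known sum of 1/k^2
error = true_sum - s_n
tail_bound = 1.0 / n        # integral of 1/x^2 from n to infinity
# error is positive and stays below the tail bound
```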

9.4 Positive Series: Other Tests

We are still considering series whose terms are positive (or at least nonnegative).

Comparing One Series with Another

Ordinary Comparison Test

Equivalent Statements:

  • A series with terms less than the corresponding terms of a convergent series converges;

  • A series with terms greater than the corresponding terms of a divergent series diverges.
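A small numerical sketch of the first statement (the specific series is an assumed example): \(\frac{1}{2^k+1}\le\frac{1}{2^k}\) term by term, and the geometric series \(\sum 1/2^k\) sums to \(1\), so the smaller series must also converge, with sum below \(1\).

```python
# Ordinary Comparison Test illustration:
# 1/(2**k + 1) <= 1/2**k for every k, and sum of 1/2**k equals 1,
# so the smaller series converges and its sum stays below 1.
n = 60
terms_small = [1.0 / (2**k + 1) for k in range(1, n + 1)]
terms_geo = [1.0 / 2**k for k in range(1, n + 1)]
assert all(a <= b for a, b in zip(terms_small, terms_geo))  # termwise comparison
s_small = sum(terms_small)   # bounded above by the geometric sum, 1
```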

Limit Comparison Test

Comparing a Series with Itself – Ratio Test

Wouldn’t it be nice if we could somehow compare a series with itself and thereby determine convergence or divergence?

N.B. For a series whose \(n\)th term involves \(n!\), \(r^n\) or \(n^n\), the Ratio Test usually works beautifully.
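For instance (an assumed example in the spirit of this note), for \(a_n = n!/n^n\) the ratio \(a_{n+1}/a_n=\left(\frac{n}{n+1}\right)^n\to 1/e<1\), so the Ratio Test gives convergence. A numerical sketch:

```python
import math

# Ratio Test illustration for a_n = n!/n**n: the ratio
# a_{n+1}/a_n = (n/(n+1))**n tends to 1/e < 1, so the series converges.
def a(n):
    return math.factorial(n) / n**n

ratios = [a(n + 1) / a(n) for n in range(1, 40)]
limit_estimate = ratios[-1]   # close to 1/e ≈ 0.368
```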

Summary

To test a positive series for convergence or divergence,

9.5 Alternating Series, Absolute Convergence, and Conditional Convergence

If we allow some terms to be negative, we obtain an alternating series, that is, a series of the form \[a_1-a_2+a_3-a_4+\ldots\] where \(a_n>0\) for all \(n\).

Example: the alternating harmonic series \[1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\ldots\]

N.B. The harmonic series diverges, however the alternating harmonic series converges.
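This contrast can be seen numerically; the partial sums of the alternating harmonic series settle down to \(\ln 2\) (a standard fact, used here only as a reference value):

```python
import math

# The alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
# converges (its sum is ln 2), even though the harmonic series diverges.
def alt_harmonic_partial_sum(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

s_n = alt_harmonic_partial_sum(10000)   # close to ln 2 ≈ 0.6931
```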

A Convergence Test for Alternating Series

Comment: The following is an equivalent statement of Theorem A in the textbook.

The Alternating Series Test

The series \[\sum_{n=1}^\infty (-1)^{n+1} a_n=a_1-a_2+a_3-a_4+\ldots\] converges if the following conditions are satisfied:

  1. The \(a_n\)’s are all positive.
  2. The \(a_n\)’s are eventually nonincreasing: \(a_n \geq a_{n+1}\) for all \(n \geq N\), for some integer \(N\).
  3. \(a_n\to 0\).

Illustrations:

Suppose the sequence \(\{a_n\}\) is decreasing. Thus, for the alternating series we have

and so on.

Note that

  • the even-numbered terms \(S_2, S_4,S_6,\ldots\) are increasing and bounded above and hence must converge to a limit, call it \(S^\prime\);

  • the odd-numbered terms \(S_1, S_3,S_5,\ldots\) are decreasing and bounded below and also converge to a limit, call it \(S^{\prime\prime}\).

Both \(S^\prime\) and \(S^{\prime\prime}\) are between \(S_n\) and \(S_{n+1}\) for all \(n\), and so

Thus, the condition \(a_{n+1}\to 0\) as \(n \to \infty\) will guarantee that

  • \(S^\prime=S^{\prime\prime}\), and

  • the series converges to their common value, which we call \(S\).

Absolute Convergence

N.B. Theorem B asserts that absolute convergence implies convergence.

N.B. All our tests for convergence of series of positive terms are automatically tests for the absolute convergence of a series in which some terms are negative.

Absolute ratio test

Conditional Convergence

For example, the alternating harmonic series is conditionally convergent, or converges conditionally.

N.B. The result is extended to the alternating \(p\)-series.

If \(p\) is a positive constant, the sequence \(\{1/n^p\}\) is a decreasing sequence with limit zero. Therefore, the alternating \(p\)-series \[\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^p}=1-\frac{1}{2^p}+\frac{1}{3^p}-\frac{1}{4^p}+\ldots\]

converges.

  • If \(p>1\), the series converges absolutely as an ordinary \(p\)-series.

  • If \(0<p<1\), the series converges conditionally by the alternating series test.

Comment:

Absolutely convergent series behave much better than do conditionally convergent ones. The following is a nice theorem about absolutely convergent series. (It is not true for conditionally convergent series.)

9.6 Power Series

Series of constants vs. Series of functions

So far we have been studying what might be called series of constants, or series of numbers, \(\sum u_n\). In this section, we consider series of functions, \(\sum u_n(x)\).

There are two important questions to ask about a series of functions:

  1. For what \(x\)’s does the series converge?
  2. To what function does it converge; that is, what is the sum \(S(x)\) of the series?

A power series in \(x\) has the form \[\sum_{n=0}^\infty c_n x^n=c_0+c_1x+c_2x^2+c_3x^3+\ldots\]

The Convergence Set

We call the set on which a power series converges its convergence set.

N.B. The convergence set was an interval (or a degenerate interval in the last example). This will always be the case: it is impossible for a power series to have a convergence set consisting of two disconnected parts.

N.B. The series might converge absolutely at the end points of the interval of convergence, but this is not guaranteed (see Example 2).

Power Series in \(x-a\)

A series of the form \[\sum_{n=0}^\infty c_n (x-a)^n=c_0+c_1(x-a)+c_2(x-a)^2+\ldots\]

is called a power series in \(x-a\).

N.B. All that we have said about power series in \(x\) applies equally well to series in \(x-a\), and its convergence set is one of the following kinds of intervals:

  1. The single point \(x=a\);
  2. An interval \((a-R,a+R)\) plus possibly one or both end points (Figure 5);
  3. The whole real line.
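As a numerical sketch of case 2 (the specific series is an assumed example): for \(\sum_{n=1}^\infty x^n/n\) the ratio \(|a_{n+1}/a_n|\to|x|\), so \(R=1\); inside the interval the partial sums settle, while for \(|x|>1\) the terms themselves blow up.

```python
import math

# Convergence set illustration for sum x**n / n, which has R = 1.
def partial_sum(x, n):
    return sum(x**k / k for k in range(1, n + 1))

inside = partial_sum(0.5, 200)          # |x| < 1: converges
target = -math.log(1 - 0.5)             # its sum, -ln(1 - x) = ln 2
outside_terms = [2.0**k / k for k in range(1, 30)]  # x = 2: terms explode
```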

9.7 Operations on Power Series

Now think of a power series as a polynomial with infinitely many terms. It behaves like a polynomial under both differentiation and integration performed term by term, as follows:

Term-by-Term Differentiation and Integration

The interpretations of Theorem A:

  1. \(S\) is both differentiable and integrable;

  2. the derivative and integral may be calculated term-by-term;

  3. the radius of convergence of both the differentiated and integrated series is the same as for the original series.

A nice consequence of Theorem A is that we can apply it to a power series with a known sum formula to obtain sum formulas for other series:

Find the Sum Function for Power Series

N.B. Two important resulting series.
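One such derivation, sketched numerically (a standard example, chosen here for illustration): differentiating \(\sum_{n\ge 0}x^n=\frac{1}{1-x}\) term by term gives \(\sum_{n\ge 1}n\,x^{n-1}=\frac{1}{(1-x)^2}\) for \(|x|<1\).

```python
# Term-by-term differentiation check: the derivative series of the
# geometric series sums to 1/(1 - x)**2 inside |x| < 1.
x = 0.3
lhs = sum(n * x**(n - 1) for n in range(1, 200))  # differentiated series
rhs = 1.0 / (1 - x)**2                             # derivative of 1/(1 - x)
# lhs and rhs agree to high precision
```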

Algebraic Operations: Addition and Subtraction

Convergent power series behave like polynomials, thus can be:

  • added and subtracted term by term;

  • multiplied and divided in the manner of polynomial multiplication and long division.

  • Step 1: Write out the specific form of the series.

  • Step 2: Multiply in a manner of the multiplication of polynomials.

  • Step 3: Divide in a manner of long-division of polynomials.

9.8 Taylor and Maclaurin Series

A main question: Given a function \(f\), can we represent it as a power series in \(x\), or more generally, in \(x-a\)? More specifically, can we find numbers \(c_0,c_1,c_2, c_3, \ldots\) such that \[f(x)=c_0+c_1(x-a)+c_2(x-a)^2+c_3(x-a)^3+\ldots\]

for \(x\) around \(a\)?

By Theorem 9.7A,

Substituting \(x=a\) and solving for \(c_n\), we obtain \[c_n=\frac{f^{(n)}(a)}{n!}.\]

Uniqueness Theorem

N.B. A function cannot be represented by two different power series in \(x-a\).

  • The power series representation of a function in \(x-a\) is called its Taylor series.

  • When \(a=0\), the corresponding series is called the Maclaurin series.

Convergence of Taylor Series

N.B. This theorem tells us what the error can be when we approximate a function with a finite number of terms of its Taylor series.

(Proof is optional.)

For the special case of \(n=4\),

Then, define a new function

where \(x\) and \(a\) are treated as constants. Clearly, \(g(x)=0\) and

Taking the derivative of \(g\),

We now—finally—answer the question about whether a function \(f\) can be represented by a power series in \(x - a\).

N.B. When \(a=0\), we obtain the Maclaurin series \[f(x)=f(0)+f^\prime(0)x+\frac{f^{\prime\prime}(0)}{2!}x^2+\frac{f^{\prime\prime\prime}(0)}{3!}x^3+\ldots\]
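The classic case is \(f(x)=e^x\): every derivative at \(0\) equals \(1\), so \(c_n=1/n!\), and the partial sums of \(\sum x^n/n!\) approximate \(e^x\). A numerical sketch:

```python
import math

# Maclaurin coefficients c_n = f^(n)(0)/n!. For f(x) = e**x each
# derivative at 0 is 1, so c_n = 1/n!.
def maclaurin_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

approx = maclaurin_exp(1.0, 15)   # 15-term partial sum at x = 1
# approx is very close to e
```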

Sum Function for Power Series

Binomial Series

Summary of Maclaurin Series

9.9 The Taylor Approximation to a Function

Cutting off the series after a finite number of terms leads to a polynomial that we can use to approximate a function. Such polynomials are called Taylor or Maclaurin polynomials.

Taylor Polynomials

Taylor Polynomial of Order 1

In the formula of linear approximation to function \(f\) near a point \(a\), \[P_1(x)=f(a)+f^\prime(a)(x-a)\] \(P_1\) is called the Taylor polynomial of order \(1\) based at \(a\).

Comments: The approximation is much better for \(\ln 0.9\) than for \(\ln 1.5\), since \(0.9\) is closer to \(1\) than is \(1.5\). (The \(4\)-digit true values of \(\ln 0.9\) and \(\ln 1.5\) are \(-0.1054\) and \(0.4055\).)
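This comparison is easy to reproduce: with \(f(x)=\ln x\) and \(a=1\), \(P_1(x)=\ln 1 + \frac{1}{1}(x-1)=x-1\).

```python
import math

# Order-1 Taylor polynomial of ln x based at a = 1: P1(x) = x - 1.
def p1(x):
    return x - 1.0

err_09 = abs(p1(0.9) - math.log(0.9))   # error approximating ln 0.9
err_15 = abs(p1(1.5) - math.log(1.5))   # error approximating ln 1.5
# The approximation is far better near the base point a = 1
```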

The Taylor polynomial of order \(\mathbf n\) based at \(a\) is \[P_n(x)=f(a)+f^\prime(a)(x-a)+\frac{f^{\prime\prime}(a)}{2!}(x-a)^2+\ldots+\frac{f^{(n)}(a)}{n!}(x-a)^n.\]

Maclaurin Polynomials

When \(a=0\), the Taylor polynomial of order \(n\) simplifies to the Maclaurin polynomial of order \(n\), which gives an approximation near \(x=0\):

Visualization of the approximation to \(\cos x\) by the Maclaurin polynomials \(P_1(x)\), \(P_5(x)\) and \(P_8(x)\):
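The same improvement with order can be checked numerically (the orders below are chosen for illustration; \(\cos x\) contributes only even powers):

```python
import math

# Maclaurin polynomials for cos x: P_2(x) = 1 - x**2/2,
# P_4(x) = 1 - x**2/2 + x**4/24, and so on (even powers only).
def maclaurin_cos(x, order):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(order // 2 + 1))

x = 0.5
err2 = abs(maclaurin_cos(x, 2) - math.cos(x))  # order-2 error
err4 = abs(maclaurin_cos(x, 4) - math.cos(x))  # order-4 error, much smaller
```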

Error in the Method

Two kinds of errors that occur in approximation processes:

  • the error of the method, which comes from using a polynomial in place of the exact sum of the series;

  • the error of calculation, which includes errors due to rounding.

Comments: We can reduce the error of the method by using polynomials of higher order; however, higher-order polynomials require more calculations and can therefore increase the error of calculation.

In Section 9.8, we obtained a formula for the error of approximating a function by its Taylor polynomial:

The error, also called the remainder, \(R_n(x)\), is given by \[R_n(x)=\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1},\]

where \(c\) is some real number between \(a\) and \(x\).

Comments: One problem is that we do not know what \(c\) is. For most problems we must settle for a bound on the remainder using the known bounds on \(c\).

Comments:

  • Our goal is to find the maximum possible value of \(\lvert R_n\rvert\) for \(c\) in the given interval. (Since we do not know \(c\), the precise value of \(R_n\) is almost never obtainable.)

  • In finding a good upper bound for \(\lvert R_n\rvert\), a frequently used tool is the triangle inequality, \(\lvert a \pm b \rvert\leq \lvert a \rvert +\lvert b \rvert\), along with making the numerator larger or the denominator smaller.
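A worked bound of this kind (an assumed example): for \(f(x)=e^x\), \(a=0\), \(x=1\), we have \(R_n(1)=\frac{e^c}{(n+1)!}\) for some \(c\in(0,1)\); since \(e^c<e<3\), \(\lvert R_n(1)\rvert\le \frac{3}{(n+1)!}\).

```python
import math

# Bounding the remainder for f(x) = e**x, a = 0, x = 1, n = 9:
# |R_9(1)| = e**c / 10! for some c in (0, 1), and e**c < 3,
# so |R_9(1)| <= 3/10!.
n = 9
p_n = sum(1.0 / math.factorial(k) for k in range(n + 1))  # Taylor poly at x=1
actual_error = abs(math.e - p_n)
bound = 3.0 / math.factorial(n + 1)
# the actual error lies below the bound
```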