We often take it for granted that common functions such as the exponential or the
trigonometric functions exist and use them along with their properties. But where do
they come from? How are they defined and why? Let us start with the simplest class
of functions from $\mathbb{C}$ to $\mathbb{C}$ or $\mathbb{R}$ to $\mathbb{R}$ that
we can think of. Arguably this is the class of polynomial functions (which can be
evaluated by a finite number of simple operations) such as
$$
p:\mathbb{C}\to \mathbb{C},\: x\mapsto \sum_{k=0}^n a_kx^k,
$$
where $n\in \mathbb{N}$ and the coefficients $a_k$ are complex numbers. We shall
consider the real case here and therefore assume that $a_k\in \mathbb{R}$ for
$k=0,\dots,n$ and restrict the polynomial's domain of definition to $\mathbb{R}$.
Unfortunately polynomials do not suffice in order to successfully deal with many
important and basic problems. One of the simplest growth models stipulates that
populations grow at a rate which is proportional to their size. If
$x:[0,\infty)\to[0,\infty)$ is the function that tracks population size over time,
then mathematically this can be stated as
$$
\dot x(t)=\alpha\, x(t),\: t>0,
$$
for a constant of proportionality $\alpha>0$ (which we set without loss of generality
to $1$) and some observed initial population size $x_0\geq 0$ at time $t=0$.
With only the simplest functions in our arsenal we attempt to find a solution in the
form
$$
x(t)=\sum_{k=0}^n a_kt^k
$$
and try to determine both $n$ and the coefficients $a_k$. Validity of the equation
requires
$$
\sum _{k=1}^{n}ka_kt^{k-1}=\sum _{k=0}^na_kt^k,\text{ recalling that
}\alpha =1.
$$
Observing that
$$
\sum _{k=1}^{n}ka_kt^{k-1}=\sum_{k=0}^{n-1}(k+1)a_{k+1}t^k,
$$
we infer that $(k+1)a_{k+1}=a_k$ for $k=0,\dots,n-1$ is a reasonable choice towards
satisfying the equation: with it, all terms but one in the equation cancel. The
initial condition $x(0)=a_0=x_0$ then determines $a_0$. Thus
$$
a_k=\frac{a_{k-1}}{k}=\frac{a_{k-2}}{k(k-1)}=\dots=\frac{a_0}{k!}=\frac{x_0}{k!}.
$$
Define $x_n(t)=x_0\sum_{k=0}^n \frac{t^k}{k!}$ and observe that, since
$\dot x_n(t)=x_0\sum_{k=0}^{n-1}\frac{t^k}{k!}$,
$$
|\dot x_n(t)-x_n(t)|=\Bigl|x_0\frac{t^n}{n!}\Bigr|=|x_0|\frac{|t|^n}{n!}.
$$
While this shows that $x_n$ is not a solution, if we take $0< T< \infty$ and only
consider $t\in[-T,T]$, we see that this error satisfies
$$
|x_0|\frac{|t|^n}{n!}\leq |x_0|\frac{T^n}{n!}\to 0\text{ as }n\to\infty,
$$
independently of $t\in[-T,T]$ and for any fixed $x_0$. This seems to indicate that
$$
x_\infty(t)=x_0\sum_{k=0}^\infty \frac{t^k}{k!}=x_0\lim_{n\to\infty}\sum_{k=0}^n
\frac{t^k}{k!}
$$
will actually solve the problem (at least on $[-T,T]$ and hence
everywhere since $T$ is arbitrary), provided the series
converges. At a purely formal level, this can easily be verified by
taking a derivative term by term in the series. We shall see that this
is indeed justified later in this Lecture. This series is
taken as the definition of the exponential function
$$
e^t=\lim_{n\to\infty}\sum_{k=0}^n\frac{t^k}{k!}.
$$
While this is a very simple example, it clearly points to the necessity of
considering and understanding series and their convergence behavior.
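As a quick numerical sanity check of the derivation above (not part of the theory;
the helper name x_n is chosen purely for illustration), the following Python sketch
evaluates the partial sums and confirms that the residual $|\dot x_n-x_n|$ at $t=T$
equals $|x_0|T^n/n!$, which decays rapidly:
\begin{verbatim}
import math

def x_n(t, n, x0=1.0):
    # Partial sum x_n(t) = x0 * sum_{k=0}^{n} t^k / k!
    return x0 * sum(t**k / math.factorial(k) for k in range(n + 1))

T, x0 = 2.0, 1.0
for n in (5, 10, 15, 20):
    # Since d/dt x_n = x_{n-1}, the residual |x_n' - x_n| at t = T
    # equals |x_{n-1}(T) - x_n(T)| = |x0| T^n / n!.
    residual = abs(x_n(T, n - 1, x0) - x_n(T, n, x0))
    bound = abs(x0) * T**n / math.factorial(n)
    print(n, residual, bound, abs(x_n(T, n, x0) - x0 * math.exp(T)))
\end{verbatim}
The last printed column shows the partial sums approaching $x_0e^T$ as well.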
Definition and Convergence Tests
Definition (Series)
With the symbol $\sum_{n=1}^\infty a_n$ we denote the sequence
$(s_n)_{n\in\mathbb{N}}$ of partial sums $s_n=\sum_{k=1}^n a_k$ and
call it a series. We say that it converges to a finite
number $s\in\mathbb{R}$ if the sequence of partial sums converges to
it, i.e. if
$$
s=\lim_{n \to \infty} s_n\, .
$$
If a series does not converge, we say it diverges.
Question
How can we test whether the series $\sum_{n=1}^\infty a_n$ converges or diverges?
Proposition (Necessary Condition for Convergence)
If the real series $\sum_{n=1}^\infty a_n$ converges, then $a_n \to 0$ as $n \to \infty$.
Example
The series $\sum_{n=1}^\infty \dfrac{(-1)^n + n}{n+100}$ diverges: since
$\lim_{n\to\infty} \dfrac{(-1)^n + n}{n+100} = 1\ne 0$, the necessary condition is
violated.
Definition (Absolute Convergence)
We say that $\sum_{n=1}^\infty a_n$ converges absolutely if
$\sum_{n=1}^\infty |a_n|$ converges.
Theorem (Cauchy Criterion)
The real (or complex) series $\sum_{n=1}^\infty a_n$ converges if and only if for
each $\epsilon>0$ there is $N\in \mathbb{N}$ such that
$$
| \sum_{k=n}^m a_k |\leq\epsilon\text{ whenever }m\geq n \ge N\, .
$$
This follows from the completeness of $\mathbb{C}$ since the condition amounts to
the Cauchy property of the sequence $(s_n)_{n\in\mathbb{N}}$ of partial sums.
Corollary
If the real (or complex) series $\sum_{n=1}^\infty a_n$ converges absolutely, then
$\sum_{n=1}^\infty a_n$ converges. The converse is not true in general.
The first part is a consequence of the Cauchy criterion along with
$$
|\sum_{k=n}^m a_k| \le \sum_{k=n}^m |a_k|\, .
$$
The second follows from the fact that
$$
\sum_{n=1}^\infty \frac{(-1)^n}{n}
$$
converges but does not converge absolutely.
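A short numerical experiment, offered only as an informal check, contrasts the two
behaviors: the partial sums of $\sum (-1)^n/n$ settle down (the limit is $-\ln 2$),
while those of $\sum 1/n$ keep growing like $\ln N$:
\begin{verbatim}
import math

for N in (10**2, 10**4, 10**6):
    alternating = sum((-1)**n / n for n in range(1, N + 1))
    absolute = sum(1.0 / n for n in range(1, N + 1))
    # alternating approaches -log(2); absolute grows like log(N).
    print(N, alternating + math.log(2), absolute, math.log(N))
\end{verbatim}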
Series with Nonnegative Terms
Theorem (Comparison Test)
Assume that $0 \le b_n \le a_n \le c_n$ for $n\in \mathbb{N}$. Then
(i) If $\sum_{n=1}^\infty c_n$ converges, then
$\sum_{n=1}^\infty a_n$ converges.
(ii) If $\sum_{n=1}^\infty b_n$ diverges, then
$\sum_{n=1}^\infty a_n$ diverges.
Theorem ($p$-Series)
The series
$
\sum_{n=1}^{\infty} \dfrac{1}{n^p}
$
converges if $p>1$ and diverges (to $+\infty$) if $p \le 1$.
Example
The series $\sum_{n=1}^\infty \dfrac{(-1)^n + n^2 }{n^4+10n+1}$ converges: since
$$
0 \le \dfrac{(-1)^n + n^2 }{n^4+10n+1} \le \dfrac{2 n^2}{n^4} =
\frac{2}{n^2}
$$
and since $\sum_{n=1}^\infty \dfrac{1}{n^2}$ converges (by the
previous theorem with $p=2$), the comparison test yields convergence.
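As an informal check of the comparison just made, the following sketch sums the
series and its convergent dominating series side by side:
\begin{verbatim}
for N in (10, 10**3, 10**5):
    s = sum(((-1)**n + n**2) / (n**4 + 10*n + 1) for n in range(1, N + 1))
    dominating = sum(2.0 / n**2 for n in range(1, N + 1))
    # s is squeezed below the convergent dominating partial sum.
    print(N, s, dominating)
\end{verbatim}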
Theorem (Integral Test)
Assume that $f:[1,\infty)\to \mathbb{R}$ is a decreasing non-negative function and
let $(a_n)_{n\in\mathbb{N}}$ be a sequence. Then
(i) $0\le a_n \le f(n)$ and $\int_{1}^\infty f(x) dx < + \infty$ imply that
$\sum_{n=1}^\infty a_n$ converges.
(ii) $a_n\ge f(n)$ and $\int_{1}^\infty f(x) dx =+ \infty$ imply that
$\sum_{n=1}^\infty a_n$ diverges.
Example
For which values of $p>0$ does $\sum_{n=1}^\infty \dfrac{1}{n\log ^{\, p}(n+1)}$
converge? First notice that
$$
\frac{1}{1+x}\leq\frac{1}{x}=\frac{1+x}{x}\frac{1}{1+x}\leq 2 \frac{1}{1+x}\, ,\: x\geq 1\, .
$$
It follows by comparison that it is enough to consider the convergence of
$$
\sum_{n=1}^\infty \dfrac{1}{(n+1)\log ^{\, p}(n+1)}.
$$
Let $f_p(x)=\frac{1}{(x+1)\log ^{\, p} (x+1)}$. If $p\ne 1$, then
$$
\int_1^\infty f_p(x)\, d x=\frac{1}{1-p}\log ^{1-p}(x+1)\Big|_{x=1}^\infty\, ,
$$
which is finite when $p>1$ and infinite when $p<1$; for $p=1$ the integral equals
$\log\log(x+1)\big|_{x=1}^\infty=\infty$. The result therefore follows from the
integral test with $a_n=f_p(n)$.
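The integral test can also be watched in action: the rough numerical sketch below
compares the partial sums of $f_p(n)$ with the antiderivative computed above, for
sample values of $p$ (the helper names f and F are chosen for illustration):
\begin{verbatim}
import math

def f(x, p):
    return 1.0 / ((x + 1) * math.log(x + 1) ** p)

def F(x, p):
    # Antiderivative of f_p from the computation above.
    if p == 1.0:
        return math.log(math.log(x + 1))
    return math.log(x + 1) ** (1 - p) / (1 - p)

N = 10**5
for p in (0.5, 1.0, 2.0):
    s = sum(f(n, p) for n in range(1, N + 1))
    # Sum and integral grow without bound together (p <= 1) or settle (p > 1).
    print(p, s, F(N, p) - F(1, p))
\end{verbatim}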
Theorem (Ratio Test)
Let $a_n >0$, $n=1,2,\dots$ and assume that
$\lim_{n \to \infty} \dfrac{a_{n+1}}{a_n} = r$. Then
(i) If $r<1$, $\sum_{n=1}^\infty a_n$ converges.
(ii) If $r>1$, $\sum_{n=1}^\infty a_n$ diverges.
(iii) For $r=1$ the test fails.
Example
(a) Notice that
$$
a_n = \dfrac{2^n}{n!} >0\text{ and that }\dfrac{a_{n+1}}{a_n} =
\dfrac{2^{n+1}}{(n+1)!}\dfrac{n!}{2^n} = \dfrac{2}{n+1}\to 0\text{ as }n \to \infty
$$
implies convergence via the ratio test.
(b) Let $a_n=\frac{1}{n}$ and $b_n=\frac{1}{n^2}$. Then
$\lim_{n\to\infty} \frac{a_{n+1}}{a_n}=\lim_{n\to\infty}\frac{b_{n+1}}{b_n}=1$, but
$\sum_{n=1}^\infty a_n$ diverges, while $\sum_{n=1}^\infty b_n$
converges, showing that the test indeed fails when $r=1$.
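A few lines of Python confirm (a) numerically; the value $e^2-1$ of the full sum
(starting from $n=1$) is standard and included only for reference:
\begin{verbatim}
import math

a = lambda n: 2.0**n / math.factorial(n)
print([a(n + 1) / a(n) for n in (1, 5, 10, 50)])         # ratios 2/(n+1) -> 0
print(sum(a(n) for n in range(1, 30)), math.exp(2) - 1)  # partial sum vs e^2 - 1
\end{verbatim}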
Theorem (Root Test)
If $\limsup_{n \to \infty} \sqrt[n]{|a_n|} = r$, then
(i) if $r<1$, $\sum_{n=1}^\infty |a_n|$ converges.
(ii) If $r>1$, $\sum_{n=1}^\infty a_n$ diverges.
(iii) For $r=1$ the test fails.
Example
Find the interval (for $x$) on which $\sum_{n=1}^\infty
\dfrac{n^2}{8^n}x^{3n}$ converges.
Let $a_n = \dfrac{n^2}{8^n}x^{3n}$. Then
$$
\sqrt[n]{|a_n|} = \sqrt[n]{\dfrac{n^2}{8^n}} |x|^3 = \dfrac{|x|^3}{8}
\sqrt[n]{n^2}\to\dfrac{|x|^3}{8}<1\text{, when }|x|<2.
$$
Therefore, the series converges for $x\in (-2, 2)$. At $x=\pm 2$ the terms
$\frac{n^2}{8^n}(\pm 2)^{3n}=(\pm 1)^n n^2$ do not tend to zero, so the series
diverges there and the interval of convergence is exactly $(-2,2)$.
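Because the terms $a_n$ over- and underflow quickly, a numerical check of this
computation is best done on a logarithmic scale; the sketch below (helper name
chosen for illustration) evaluates $\sqrt[n]{|a_n|}=\exp(\tfrac1n\log|a_n|)$ for
sample values of $x$ on either side of $2$:
\begin{verbatim}
import math

def nth_root_of_term(x, n):
    # (1/n) * log|a_n| with a_n = n^2 / 8^n * x^(3n), exponentiated back.
    log_an = 2 * math.log(n) - n * math.log(8) + 3 * n * math.log(abs(x))
    return math.exp(log_an / n)

for x in (1.9, 2.0, 2.1):
    print(x, [round(nth_root_of_term(x, n), 4) for n in (10, 100, 10**4)],
          abs(x)**3 / 8)  # the limit |x|^3 / 8 from the example
\end{verbatim}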
General Series
We now have criteria to determine whether a series with non-negative terms
converges. The analysis of convergence of general series $\sum_{n=1}^\infty a_n$ is
more involved because of cancellation effects between terms. The following identity
is, however, a very useful tool.
Theorem (Abel's Summation Formula)
It holds that
$$
\sum_{k=p}^q a_k b_k = \sum_{k=p}^q s_k (b_k - b_{k+1}) + s_q b_{q+1}
- s_{p-1} b_p\, ,
$$
where $s_k = \sum_{j=1}^k a_j$ for $k\geq 1$ and $s_0:=0$.
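Since the formula is purely algebraic, it can be verified on arbitrary data; the
sketch below does so for random sequences (indices are shifted by one because
Python lists start at $0$):
\begin{verbatim}
import random

random.seed(0)
a = [random.uniform(-1, 1) for _ in range(30)]   # a[k-1] plays the role of a_k
b = [random.uniform(-1, 1) for _ in range(31)]   # b[k-1] plays the role of b_k

def s(k):  # s_k = a_1 + ... + a_k, with s_0 = 0
    return sum(a[:k])

p, q = 5, 25
lhs = sum(a[k - 1] * b[k - 1] for k in range(p, q + 1))
rhs = (sum(s(k) * (b[k - 1] - b[k]) for k in range(p, q + 1))
       + s(q) * b[q] - s(p - 1) * b[p - 1])
print(lhs - rhs)  # zero up to rounding error
\end{verbatim}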
Example (Alternating Series Test)
If $b_n\geq b_{n+1}$ for $n\geq 1$ and $b_n\to 0$ as $n\to\infty$, then
$\sum_{n=1}^\infty (-1)^n b_n$ converges. To see this, apply Abel's formula to
$a_n = (-1)^n$, for which
$
s_n=
\begin{cases}
0\, ,&\text{ if }n\text{ is even}\\
-1\, ,&\text{ if }n\text{ is odd}
\end{cases}\,
$
and use the theorem formulated below.
Theorem (Dirichlet's Test)
Let $(a_k)_{k\in\mathbb{N}}$ and $(b_k)_{k\in\mathbb{N}}$ be two
sequences of numbers such that
(i) $\bigl(s_n = \sum_{k=1}^n a_k\bigr)_{n\in \mathbb{N} }$ is bounded.
(ii) $b_n \ge b_{n+1}$ for $n \ge 1$ and $b_n \to 0$ as $n \to \infty$.
Then $\sum_{n=1}^\infty a_n b_n$ converges.
Given $\epsilon>0$, we need to find $N$ such that
$$
| \sum_{k=n}^m a_k b_k| \leq \epsilon\text{ whenever }m\geq n \ge N\, .
$$
Since $(s_n)_{n\in\mathbb{N}}$ is bounded, there is $M>0$ with $|s_n| \le M$ for all $n \ge 1$. It
follows that
\begin{multline*}
| \sum_{k=n}^m a_k b_k | = | \sum_{k=n}^m s_k (b_k - b_{k+1}) +
s_mb_{m+1} - s_{n-1}b_n | \\
\le \sum_{k=n}^m |s_k| (b_k - b_{k+1}) + (|s_m| + |s_{n-1}|)(|b_{m+1}|+ |b_n|)\\
\le M (b_n - b_{m+1}) + 2 M (|b_n| + |b_{m+1}|) \le 3 M (|b_n| + |b_{m+1}|).
\end{multline*}
Since $\lim_{n\to \infty} b_n =0$, there exists $N$ such that
$$
|b_n|\le\frac{\epsilon}{6M}\text{ for }n\geq N\, .
$$
We conclude that
$$
|\sum_{k=n}^m a_k b_k| \le 6M\frac{\epsilon}{6M} = \epsilon\text{ for
}m\geq n \ge N\, .
$$
The claim now follows by means of Cauchy's criterion.
Example
The series $\sum_{n=1}^\infty \dfrac{(-1)^n}{\ln (n+1)}$ converges: since
$\ln (n+2) \ge \ln (n+1)$ for $n\in \mathbb{N}$, we have that
\begin{equation*}
\dfrac{1}{\ln (n+2)} \le \dfrac{1}{\ln (n+1)}\text{ for }n\in
\mathbb{N}.
\end{equation*}
Thus convergence follows from the alternating series test and the fact
that
$$
\lim_{n \to \infty}\dfrac{1}{\ln (n+1)} = 0\, .
$$
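The convergence here is extremely slow, as a quick experiment suggests; consecutive
partial sums move by roughly $1/\ln N$ and bracket the limit:
\begin{verbatim}
import math

s = 0.0
checkpoints = {10, 10**3, 10**6}
for n in range(1, 10**6 + 1):
    s += (-1)**n / math.log(n + 1)
    if n in checkpoints:
        print(n, s)   # partial sums settle only very slowly
\end{verbatim}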
Power Series
Definition (Power Series and Radius of
Convergence)
A series of the form $\sum_{n=0}^\infty a_n x^n$ for a sequence
$(a_n)_{n\in\mathbb{N}}$ and the variable $x\in \mathbb{R}$ (or, more
generally, $x\in\mathbb{C}$) is called a
power series. Its radius of
convergence is defined by
$$
R = \dfrac{1}{\limsup_{n \to \infty} \sqrt[n]{|a_n|}}\, ,
$$
with the conventions $R=\infty$ if the $\limsup$ is $0$ and $R=0$ if it is
$\infty$.
Theorem
The following statements hold
(i) $\sum_{n=0}^\infty a_n x^n$ converges absolutely for $x\in
(-R,R)$.
(ii) For any $r< R$, $\sum_{n=0}^\infty a_n x^n$ converges absolutely
and uniformly for $x\in [-r,r]$.
(iii) Let $r < R$ and define $f(x):=\sum_{n=0}^\infty a_n x^n$. Then $f$
is differentiable on $(-R,R)$ and integrable on $[-r, r]$ and
$$
f'(x)=\sum_{n=1}^\infty n a_n x^{n-1},\quad x\in (-R, R)
$$
and
$$
\int_0^x f(t)\, dt=\sum_{n=0}^\infty \frac{a_n}{n+1} x^{n+1}, \quad x\in
(-R, R).
$$
(iv) When $x=R$ or $x=-R$, nothing can be said in general about
convergence.
Remark
When $x$ is allowed to be complex in the above theorem, the
interval $[-r,r]$ for $r\in[0,R)$ has to be replaced by the closed ball
$[|x|\leq r]=\{ x\in \mathbb{C}\, :\, |x|\leq r\}$.
Example
(a) Determine the region of convergence of $\sum_{n=1}^\infty
2^{-n} x^{n^2}$.
(b) Determine where $\sum_{n=1}^\infty \dfrac{(-1)^n}{\sqrt{n}
2^n} x^{n}$ converges.
(a) We compute the radius of convergence via
$$
R = \dfrac{1}{\limsup_{n \to \infty} \sqrt[n^2] { 2^{-n}}} = 1,
$$
since $\sqrt[n^2]{2^{-n}}=2^{-1/n}\to 1$ as $n\to\infty$.
At the end points $x=\pm 1$ we have convergence since
$$
\sum_{n=1}^\infty 2^{-n}\text{ and }\sum_{n=1}^\infty 2^{-n} (-1)^{n^2} =
\sum_{n=1}^\infty 2^{-n} (-1)^n
$$
do converge.
(b) Since
$$
R = \dfrac{1}{\limsup_{n \to \infty} \sqrt[n] { \frac{1}{\sqrt{n}2^n}}}=2
$$
we have convergence on $(-2,2)$. At $x=2$ the series becomes
$\sum_{n=1}^\infty \frac{(-1)^n}{\sqrt{n}}$, which converges by the alternating
series test, while at $x=-2$ it becomes $\sum_{n=1}^\infty \frac{1}{\sqrt{n}}$,
which diverges ($p$-series with $p=\tfrac12$). The region of convergence is
therefore $(-2,2]$.
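The $\limsup$ in (b) can be checked numerically on a logarithmic scale, offered
only as an informal confirmation of the computation above:
\begin{verbatim}
import math

for n in (10, 100, 10**4):
    # n-th root of |a_n| = 1 / (sqrt(n) 2^n), computed via logarithms
    log_an = -0.5 * math.log(n) - n * math.log(2)
    print(n, math.exp(log_an / n))   # -> 1/2, hence R = 2
\end{verbatim}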
We conclude this section with examples of common power series.