Convergence of Sequences and Series
If we analyze the values of a sequence, we can observe certain behaviors or patterns, such as the sequence becoming monotonically increasing or decreasing, or the sequence staying within a certain range, i.e. being bounded between two values.
Another pattern we may notice is that the terms of the sequence get closer and closer to a certain value as the index \(n\) increases. Let’s consider the following sequence and its terms:
\[a_n = \frac{2n+3}{n} \text{ with } n \in \mathbb{N} \]
| \(a_1\) | \(a_2\) | \(a_3\) | \(a_{10}\) | \(a_{1000}\) | \(a_{100000}\) |
|---|---|---|---|---|---|
| 5 | 3.5 | 3 | 2.3 | 2.003 | 2.00003 |
As \(n\) grows larger, the terms of the sequence get closer and closer to 2. This value is called the limit of the sequence. If the terms of the sequence get arbitrarily close to some value as \(n\) increases, then we say the sequence converges to that value. If the terms of the sequence do not get close to a certain value, then we say the sequence diverges. Another common phrasing is that as \(n\) approaches infinity, the terms of the sequence approach the limit. We can formally write this as:
\[\lim_{n \to \infty} \frac{2n+3}{n} = 2 \]Another common way to refer to the limit is by saying that the limes of the sequence is 2; "limes" is the Latin word for limit.
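To see this numerically, here is a minimal sketch (the function name `a` and the sampled indices are just illustrative choices):

```python
# Print terms of a_n = (2n + 3)/n for growing n to watch them approach 2.
def a(n: int) -> float:
    return (2 * n + 3) / n

for n in [1, 2, 3, 10, 1000, 100000]:
    print(f"a_{n} = {a(n)}")  # approaches the limit 2 as n grows
```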
Epsilon-Neighborhood
To formally define the limit of a sequence, we first introduce the concept of an epsilon neighborhood (or epsilon strip). This is a region around a suspected limit with a radius of \(\epsilon\), where \(\epsilon > 0\).
The index of the term from which on the sequence stays inside this neighborhood is called the dipping number or entry index, and is denoted as \(N_{\epsilon}\) or \(N_0\). So for example if we have a sequence \(a_n\) that converges to a limit \(L\), then for every \(\epsilon > 0\) we can describe the set of indices whose terms lie in the neighborhood around \(L\) as:
\[\{n \in \mathbb{N} \; | \; |a_n - L| < \epsilon\} \text{ or } \{n \in \mathbb{N} \; | \; a_n \in (L-\epsilon, L+\epsilon)\} \]The entry index \(N_{\epsilon}\) is the index of the term that first enters this neighborhood without leaving it again. This means that for all \(i \geq N_{\epsilon}\), the terms \(a_i\) are in the neighborhood around \(L\), i.e. \(a_i \in (L-\epsilon, L+\epsilon)\) for all \(i \geq N_{\epsilon}\).
For a given sequence \(a_n = \frac{2n+3}{n}\) and \(\epsilon = 0.1\), we can calculate the entry index \(N_{0.1}\) by solving the inequality:
\[\begin{align*} \left|\frac{2n+3}{n} - 2\right| < 0.1 \\ \left|\frac{3}{n}\right| < 0.1 \\ \frac{3}{n} < 0.1 \\ 3 < 0.1n \\ 30 < n \end{align*} \]Since \(n > 0\), the absolute value can simply be dropped. So for \(\epsilon = 0.1\), the entry index is \(N_{0.1} = 31\), the first integer greater than 30. We can check this by calculating the terms of the sequence for \(n=29\), \(n=30\), and \(n=31\):
- \(a_{29} = \frac{2*29+3}{29} = 2.1034\)
- \(a_{30} = \frac{2*30+3}{30} = 2.1\)
- \(a_{31} = \frac{2*31+3}{31} = 2.0968\)
The entry index is 31 and not 30, as the terms must be strictly less than 0.1 away from the limit, and \(a_{30}\) is exactly 0.1 away.
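We can also find the entry index numerically. A minimal sketch, assuming (as is the case here) that the sequence approaches its limit monotonically, so that the first term inside the neighborhood is the entry index; the helper name `entry_index` is an illustrative choice:

```python
# Scan for the first index whose term is strictly within eps of the limit.
# For a sequence that approaches its limit monotonically, all later
# terms then stay inside the epsilon-neighborhood as well.
def entry_index(a, limit, eps):
    n = 1
    while abs(a(n) - limit) >= eps:
        n += 1
    return n

print(entry_index(lambda n: (2 * n + 3) / n, limit=2, eps=0.1))  # 31
```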
This is what d’Alembert and Cauchy used to give the first rigorous definition of a limit. A sequence \(a_n\) converges to a limit \(L\) if for every \(\epsilon > 0\) there exists an entry index \(N_{\epsilon}\) such that for all \(n \geq N_{\epsilon}\) the terms \(a_n\) are in the \(\epsilon\)-neighborhood around \(L\). In other words, after a certain point in the sequence, all terms are within \(\epsilon\) of the limit. More formally:
\[L \text{ is the limit of } a_n \text{ if } \forall \epsilon > 0, \exists N_{\epsilon} \in \mathbb{N} \text{ such that } \forall n \geq N_{\epsilon}, |a_n - L| < \epsilon \]Another way to define the limit is to say that a sequence \(a_n\) converges to a limit \(L\) if for every \(\epsilon > 0\) there are only finitely many terms of the sequence outside the \(\epsilon\)-neighborhood. These two definitions are equivalent: in the first definition, only the terms with \(n < N_{\epsilon}\) can lie outside the neighborhood, which are finitely many, and consequently infinitely many terms lie inside the neighborhood. We can define the set of indices of the elements outside the neighborhood just like we did with the elements inside the neighborhood:
\[\{n \in \mathbb{N} \; | \; |a_n - L| \geq \epsilon\} \text{ or } \{n \in \mathbb{N} \; | \; a_n \notin (L-\epsilon, L+\epsilon)\} \]An important note is that for a real-valued sequence the limit needs to be a real number. The limit can not be infinity, as infinity is not a real number. So if the terms of a sequence grow beyond all bounds, the sequence is divergent; we say it diverges to \(+\infty\). We can define this more formally: a sequence \(a_n\) diverges to \(+\infty\) if for every threshold \(T > 0\) there exists an entry index \(N_T\) such that for all \(n \geq N_T\) the terms satisfy \(a_n > T\):
\[\lim_{n \to \infty} a_n = +\infty \text{ if } \forall T > 0, \exists N_T \in \mathbb{N} \text{ such that } \forall n \geq N_T, a_n > T \]Similarly, \(a_n\) diverges to \(-\infty\) if \((-a_n)\) diverges to \(+\infty\).
If a sequence converges to the limit 0 then we say that the sequence is a null sequence. These are important sequences to look at as they are the building blocks for many other sequences and also later for series.
For each sequence the limit is unique. So if a sequence converges to a limit, then this limit is unique. The proof of this is rather intuitive. Let’s assume that a sequence converges to two different limits \(L\) and \(M\). We can then choose an epsilon so that the neighborhoods around \(L\) and \(M\) do not overlap, for example \(\epsilon = \frac{|L-M|}{2}\):
\[(L-\epsilon, L+\epsilon) \cap (M-\epsilon, M+\epsilon) = \emptyset \]
We know that, by definition, only finitely many terms lie outside an epsilon neighborhood of a limit, while infinitely many lie inside. However, if the neighborhoods do not overlap, then the infinitely many terms inside the neighborhood around \(L\) all lie outside the neighborhood around \(M\), of which there can only be finitely many, and vice versa. This is a contradiction and therefore the limit must be unique.
We can also show that all sequences that converge are bounded. Intuitively this might make sense to some, but we can also prove it. If a sequence converges to a limit \(L\), then we can choose \(\epsilon = 1\). We then know that all terms from the index \(N_1\) on lie in the neighborhood around \(L\), so these terms are bounded below by \(L-1\) and above by \(L+1\). We also know that there are only finitely many terms outside of this neighborhood, namely \(a_1, \ldots, a_{N_1 - 1}\), and finitely many real numbers always have a maximum and a minimum. Together the two groups cover the whole sequence, so the sequence is bounded below by \(\min(a_1, \ldots, a_{N_1-1}, L-1)\) and above by \(\max(a_1, \ldots, a_{N_1-1}, L+1)\).

However, the other way around is not true. Not all bounded sequences converge. For example the sequence \(a_n = (-1)^n\) is bounded between -1 and 1 but does not converge. This is because the terms keep switching between -1 and 1 and do not get closer to a certain value.
\[a_n \text{ converges} \implies a_n \text{ is bounded} \]For a constant sequence such as \(a_n = 5\) the limit is the constant itself, so in this case 5. This is rather obvious but we can also prove it. By definition we know that after some index \(N_{\epsilon}\) all terms have to be within the epsilon neighborhood around the limit. Because every term equals the constant, the distance to the limit \(L = 5\) is always 0, which is smaller than any \(\epsilon > 0\), so the following is satisfied for all \(n \geq N_{\epsilon}\) (we can even take \(N_{\epsilon} = 1\)):
\[|a_n - L| = |5 - 5| = 0 < \epsilon \]This can also be generalized to any constant sequence:
\[(a_n)_{n \geq 1} = c \implies \lim_{n \to \infty} a_n = c \]Because the following always holds:
\[\begin{align*} |a_n - c| &= |c - c| = 0 \quad \forall n \geq 1 \\ |a_n - c| &< \epsilon \quad \forall n \geq 1 \text{ for any } \epsilon > 0 \end{align*} \]The next sequence we can look at is the following:
\[(a_n)_{n \geq 1} = \frac{1}{n} = 1, \frac{1}{2}, \frac{1}{3}, \ldots \]For this sequence we again need \(|a_n - L| < \epsilon\) for all \(n \geq N_{\epsilon}\), where the suspected limit is \(L = 0\). Because of the Archimedean principle we know that for every \(x > 0\) and every \(y \in \mathbb{R}\) there exists an \(n \in \mathbb{N}\) such that \(nx > y\). If we set \(x = \epsilon\), \(y = 1\), and \(n = N_{\epsilon}\), then we get:
\[\begin{align*} y &< nx \\ 1 &< N_{\epsilon} \epsilon \\ \frac{1}{N_{\epsilon}} &< \epsilon \end{align*} \]So we know that some index \(N_{\epsilon} \in \mathbb{N}\) exists such that:
\[\frac{1}{N_{\epsilon}} < \epsilon \]Which then means that for all \(n \geq N_{\epsilon}\) the following holds:
\[|a_n - 0| = \frac{1}{n} \leq \frac{1}{N_{\epsilon}} < \epsilon \]So if we had \(\epsilon = 0.8\) then we can calculate \(N_{0.8}\) by doing the following:
\[\begin{align*} \frac{1}{N_{0.8}} &< 0.8 \\ 1 &< 0.8 \, N_{0.8} \\ N_{0.8} &> \frac{1}{0.8} = 1.25 \end{align*} \]So \(N_{0.8} = 2\), which means that for \(\epsilon = 0.8\) all terms from \(n=2\) on are within the epsilon neighborhood around 0. So the limit of the sequence is 0.
In the same way, the Archimedean principle can be used to show that \(\lim_{n \to \infty} \frac{n}{n+1} = 1\), since \(\left|\frac{n}{n+1} - 1\right| = \frac{1}{n+1} < \epsilon\) for all sufficiently large \(n\).
We have seen lots of examples of sequences that converge to a limit. But what about sequences that do not converge? Let’s look at the following sequence:
\[(a_n)_{n \geq 1} = (-1)^n = -1, 1, -1, 1, \ldots \]Intuitively it is clear that this sequence does not converge as the terms keep switching between -1 and 1. We can also prove this by contradiction. Let’s assume that the sequence converges to a limit \(L\). We know that \(|a_n - a_{n+1}| = 2\) for all \(n \in \mathbb{N}\). At the same time, the following must hold for every \(\epsilon > 0\) and all \(n \geq N_{\epsilon}\):
\[|a_n - L| < \epsilon \]Now choose \(\epsilon = \frac{1}{2}\). Then there exists an index \(N_{\epsilon}\) such that for all \(n \geq N_{\epsilon}\) the following holds:
\[\begin{align*} 2 &= |a_n - a_{n+1}| \\ &= |a_n - L + L - a_{n+1}| \\ &\leq |a_n - L| + |L - a_{n+1}| \\ &< \epsilon + \epsilon = 1 \end{align*} \]So \(2 < 1\), which is a contradiction, and therefore the sequence does not converge. Note that we just added 0 by adding and subtracting \(L\) and then used the triangle inequality.
Lastly let’s look at the sequence \((a_n)_{n \geq 1} = n\). This sequence does not converge as the terms keep getting larger and larger. We can prove this by contradiction. Let’s assume that the sequence converges to a limit \(L\). Then for \(\epsilon = 1\) we know that there exists an index \(N_{\epsilon}\) such that for all \(n \geq N_{\epsilon}\) the following holds:
\[|a_n - L| < 1 \]This would mean that \(n = a_n < L + 1\) for all \(n \geq N_{\epsilon}\). But by the Archimedean principle there are natural numbers larger than \(L + 1\), so this inequality can not hold for all \(n \geq N_{\epsilon}\). So the sequence does not converge; it diverges to \(+\infty\).
Properties of Convergent Sequences
If \(a_n\) and \(b_n\) are convergent sequences with limits \(a\) and \(b\) respectively, then the following sequences are also convergent:
- \(c \cdot a_n\) with \(\lim_{n \to \infty} {c \cdot a_n} = c \cdot \lim_{n \to \infty} {a_n} = c \cdot a\) for \(c \in \mathbb{R}\)
- \(a_n \pm b_n\) with \(\lim_{n \to \infty}{(a_n \pm b_n)}={{\lim_{n \to \infty}{a_n}} \pm {\lim_{n \to \infty}{b_n}}}={a \pm b}\)
For example, \(a_n = \frac{n}{n+1} = 1 - \frac{1}{n+1}\) can be split into the constant sequence \(1\) and the null sequence \(\frac{1}{n+1}\), so its limit is \(1 - 0 = 1\).
- \(a_n \cdot b_n\) with \(\lim_{n \to \infty}{(a_n \cdot b_n)}={{\lim_{n \to \infty}{a_n}} \cdot {\lim_{n \to \infty}{b_n}}}={a \cdot b}\)
For example, for a fixed exponent \(b \in \mathbb{N}\), the sequence \(a_n = (1 + \frac{1}{n})^b\) is a product of \(b\) factors that each converge to 1, so by the product rule the limit is 1.
- \(\frac{a_n}{b_n}\) with \(\lim_{n \to \infty}{\frac{a_n}{b_n}}={\frac{\lim_{n \to \infty}{a_n}}{\lim_{n \to \infty}{b_n}}}={\frac{a}{b}}\) provided \(b \neq 0\)
This follows from the rules above. An example is \(a_n = \frac{n^2 - 2n}{n^2 + n + 1}\): dividing the numerator and denominator by \(n^2\) gives \(\frac{1 - 2/n}{1 + 1/n + 1/n^2} \to \frac{1 - 0}{1 + 0 + 0} = 1\) (see the numeric sketch after this list).
- If there exists a \(K \geq 1\) such that \(a_n \leq b_n\) for all \(n \geq K\), then \(\lim_{n \to \infty}{a_n} \leq \lim_{n \to \infty}{b_n}\), in other words \(a \leq b\).
- The product of a bounded sequence and a null sequence is always a null sequence.
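A minimal numeric sketch of the quotient-rule example above (the function name `a` is just an illustrative choice):

```python
# Terms of a_n = (n^2 - 2n)/(n^2 + n + 1) approach 1, as the quotient
# rule predicts after dividing numerator and denominator by n^2.
def a(n: int) -> float:
    return (n**2 - 2 * n) / (n**2 + n + 1)

for n in [10, 100, 10000]:
    print(n, a(n))
```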
Squeeze Theorem
The squeeze theorem or sometimes also called the sandwich theorem states that if we have two sequences \((a_n)_{n\geq 1}\) and \((c_n)_{n\geq 1}\) that have the same limit \(L\) so:
\[\lim_{n \to \infty} a_n = \lim_{n \to \infty} c_n = L \]and a third sequence \((b_n)_{n\geq 1}\) for which after a certain index \(K\) the following holds:
\[a_n \leq b_n \leq c_n \text{ for all } n \geq K \]Then the sequence \((b_n)_{n\geq 1}\) also converges to \(L\) so:
\[\lim_{n \to \infty} a_n = \lim_{n \to \infty} b_n = \lim_{n \to \infty} c_n = L \]This is a very simple but powerful theorem that can be used to show that many sequences converge to a certain limit. The idea is that if we can find two sequences that always stay below and above the sequence we are interested in, and these two sequences converge to the same limit, then they slowly get closer to each other and squeeze the sequence in between towards that same limit.

Let’s look at the sequence \(a_n = \frac{\sin(n)}{n}\). We know that the sine function is bounded between -1 and 1, so the following inequalities hold:
\[\frac{-1}{n} \leq \frac{\sin(n)}{n} \leq \frac{1}{n} \]As \(n\) goes to infinity, the terms on the left and right side go to 0. So by the squeeze theorem the sequence \(a_n = \frac{\sin(n)}{n}\) also converges to 0:
\[\lim_{n \to \infty} \frac{\sin(n)}{n} = 0 \]
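A short numeric sketch of the squeeze: the middle value is pinned between the two bounds, which both shrink to 0:

```python
import math

# sin(n)/n is squeezed between -1/n and 1/n, which both go to 0.
for n in [1, 10, 100, 10000]:
    print(-1 / n, math.sin(n) / n, 1 / n)
```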
Monotone Convergence Theorem
Theorem by Karl Weierstrass:
If the sequence \((a_n)_{n\geq 1}\) is monotonically increasing and bounded above, then it converges. More precisely we can determine the limit as:
\[\lim_{n \to \infty} a_n = \sup\{a_n | n \geq 1\} \]The same holds for monotonically decreasing sequences that are bounded below; here the limit is the infimum instead of the supremum.
\[\lim_{n \to \infty} a_n = \inf\{a_n | n \geq 1\} \] \[a_n \text{ is monotone} \land a_n \text{ is bounded} \implies a_n \text{ converges} \]The other direction does not hold: not every convergent sequence is monotone. For example the sequence \(a_n = (-1)^n \frac{1}{n}\) converges to 0 and is bounded, but it is neither monotonically increasing nor decreasing, as its terms alternate in sign.

Polynomial vs Exponential
For any fixed power \(b \in \mathbb{R}\) and any \(q\) with \(|q| < 1\) we have \(\lim_{n \to \infty} n^b q^n = 0\). In other words, exponential decay wins against any polynomial growth; equivalently, an exponential function with base greater than 1 eventually dominates any power of \(n\), which is important when comparing the running times of algorithms. Another useful limit is \(\lim_{n \to \infty} n^{1\over n} = 1\).
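A quick numeric check of both limits:

```python
# n^3 * 0.5^n -> 0 (exponential decay beats polynomial growth),
# while n^(1/n) -> 1.
for n in [10, 50, 100]:
    print(n, n**3 * 0.5**n, n ** (1 / n))
```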
Limits of Recursive Sequences
More generally, for \(c > 1\) the recursively defined sequence with \(a_1 = c\) and
\[a_{n+1} = \frac{1}{2}\left(a_n + \frac{c}{a_n}\right) \]converges to \(\sqrt{c}\). This can be shown with the theorem of Weierstrass above: by the AM-GM inequality \(a_{n+1} \geq \sqrt{c}\) for all \(n\), so from \(a_2\) on the sequence is bounded below and monotonically decreasing, hence convergent, and the limit \(L\) must satisfy \(L = \frac{1}{2}\left(L + \frac{c}{L}\right)\), i.e. \(L = \sqrt{c}\). The special case \(a_{n+1} = \frac{1}{2}\left(a_n + \frac{2}{a_n}\right)\) converges to \(\sqrt{2}\); this is Heron's method for computing square roots.
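A minimal sketch of this iteration (the helper name `heron` is just an illustrative choice):

```python
# Iterate a_{n+1} = (a_n + c/a_n)/2 starting from a_1 = c.
def heron(c: float, steps: int) -> float:
    a = c
    for _ in range(steps):
        a = (a + c / a) / 2
    return a

print(heron(2, 5))  # ~1.414213562... = sqrt(2)
print(heron(9, 8))  # ~3.0
```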
Euler’s Number
There are several possible origins of Euler's number; one is analyzing the compound interest formula \(\left(1 + \frac{1}{n}\right)^n\), i.e. an interest rate of 100% split into \(n\) compounding periods.
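A small numeric sketch of the compound interest formula:

```python
# Splitting 100% interest into n compounding periods approaches e.
for n in [1, 10, 100, 100000]:
    print(n, (1 + 1 / n) ** n)  # -> 2.71828...
```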
A useful tool on the way is Bernoulli's inequality, named after Jacob Bernoulli:
\[(1+x)^n \geq 1+nx \text{ for all } x \geq -1 \text{ and } n \in \mathbb{N} \]With it one can show that the sequence \(\left(1+{1\over n}\right)^n\) is monotonically increasing and bounded above, and therefore converges by the monotone convergence theorem; its limit defines Euler's number:
\[\lim_{n \to \infty} \left(1+{1\over n}\right)^n = e \]
Limes Superior and Inferior
Cauchy Criterion
Bolzano-Weierstrass Theorem
Sequences of Vectors and Complex Numbers
Important Sequences
Arithmetic Sequences
Zero Sequences
Harmonic Sequences
A null sequence is a sequence whose limit is 0. The harmonic sequence \(a_n={1\over n}\) is a null sequence.
Geometric Sequences
Sequences of the form \(a_n= a_1 \cdot q^{n-1}\) are geometric sequences. Each term is the geometric mean of its two neighboring terms (for positive terms): \(a_n=\sqrt {a_{n-1} \cdot a_{n+1}}\).
A geometric sequence \(a_n= a_1 \cdot q^{n-1}\) (see the numeric sketch after this list)
- with \(|q|>1\) is divergent
- with \(|q|<1\) is convergent with limit 0
- with \(q=1\) is the constant sequence \(a_1\)
- with \(q=-1\) is divergent, since it alternates.
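A small numeric sketch of the four cases:

```python
# Terms of a_n = a1 * q^(n-1) for the four cases of q.
a1, n = 1, 50
for q in [2, 0.5, 1, -1]:
    print(q, a1 * q ** (n - 1))  # explodes, vanishes, constant, alternates
```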
Rational Sequences
For a rational sequence whose numerator is a polynomial of degree \(k\) and whose denominator is a polynomial of degree \(l\), the following holds:
\[ \lim_{n \to \infty}{{a_kn^k+a_{k-1}n^{k-1}+\ldots+a_0}\over{b_ln^l+b_{l-1}n^{l-1}+\ldots+b_0}} = \begin{dcases} \pm\infty \text{ (with the sign of } {a_k\over b_l}\text{)}, & \text{if } k >l \\ {a_k\over b_l}, & \text{if } k=l \\ 0, & \text{if } k<l \end{dcases} \]
Convergence of Series
We have seen that a series is the sum of the terms of a sequence.
Just like sequences, series can also converge or diverge. A series converges if the sequence of partial sums converges. So in other words using the original sequence we calculate a new sequence, where each term is the sum of all terms up to that point.
\[\begin{align*} \text{Sequence} & : (a_1, a_2, a_3, \ldots, a_n) \\ \text{Series} & : S_n = \sum^{n}_{k=1}{a_k} = a_1 + a_2 + a_3 + \ldots + a_n \\ \text{Sequence of partial sums} & : S_1, S_2, S_3, \ldots, S_n \end{align*} \]If the sequence of partial sums converges then the series converges. The limit of the sequence of partial sums is called the sum or value of the series. If the sequence of partial sums diverges then the series diverges.
For a series to converge the underlying sequence must be a null sequence, in other words the limit of the original sequence must be zero. This is a necessary but not sufficient condition. There are series that diverge even though the sequence of terms converges to zero.
\[\sum_{n=1}^{\infty} a_n \text{ converges} \implies \lim_{n \to \infty} a_n = 0 \]There is an intuitive interpretation behind this condition. Imagine you’re summing up the terms of a series. For the series to converge, the partial sums need to settle on a finite value as you keep adding more and more terms. If the terms of the sequence do not approach zero, it becomes impossible for the partial sums to settle, and the series will diverge. In short, if the terms are not getting smaller and smaller, the series will keep getting larger and larger and will not converge.
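A small sketch contrasting a convergent and a divergent series through their partial sums (the helper name `partial_sum` is just an illustrative choice):

```python
# S_n = a_1 + ... + a_n: the series converges iff these partial sums do.
def partial_sum(a, n: int) -> float:
    return sum(a(k) for k in range(1, n + 1))

for n in [10, 100, 1000]:
    print(partial_sum(lambda k: 0.5**k, n),  # settles near 1: converges
          partial_sum(lambda k: 1, n))       # grows with n: diverges
```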
Geometric Series
Let’s look at some examples of series and analyze their convergence. The geometric series is a good example to start with. In general a geometric series has the form \(\sum a \cdot q^k\); here we set the coefficient \(a = 1\), since it would only scale every partial sum, and with it the value of the series, by \(a\). We define the geometric series for a value \(q \in \mathbb{C}\) where \(|q| < 1\) as:
\[\sum^{\infty}_{k=0}{q^k} \]For the series to converge we need to check if the sequence of partial sums converges. The sequence of partial sums is:
\[\begin{align*} S_n &= \sum^{n}_{k=0}{q^k} = 1 + q + q^2 + \ldots + q^n \\ q * S_n &= q + q^2 + q^3 + \ldots + q^{n+1} \\ S_n - q * S_n &= 1 - q^{n+1} \\ (1-q) * S_n &= 1 - q^{n+1} \\ S_n &= \frac{1 - q^{n+1}}{1-q} \end{align*} \]Now we have a closed form for the sequence of partial sums. We can now take the limit of the sequence of partial sums to see if the series converges. As \(n \to \infty\) the term \(q^{n+1}\) goes to zero as \(|q| < 1\). So we can assume the limit exists and that it is \(\frac{1}{1-q}\). Let’s prove it is indeed the limit:
\[\lim_{n \to \infty}\left|\frac{1 - q^{n+1}}{1-q} - \frac{1}{1-q}\right| = \lim_{n \to \infty}\left|\frac{1 - q^{n+1} - 1}{1-q}\right| = \lim_{n \to \infty}\left|\frac{- q^{n+1}}{1-q}\right| = \lim_{n \to \infty}\left|\frac{q^{n+1}}{1-q}\right| = 0 \]So the sequence of partial sums converges to \(\frac{1}{1-q}\) for \(|q| < 1\), which means the geometric series converges to \(\frac{1}{1-q}\).
Let’s look at the geometric series for \(q = \frac{1}{2}\):
\[\begin{align*} \sum^{\infty}_{k=0}{\left(\frac{1}{2}\right)^k} &= 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots \\ &= \frac{1}{1-\frac{1}{2}} = \frac{1}{\frac{1}{2}} = 2 \end{align*} \]But what if the index starts at 1 rather than 0? Let’s revisit the closed form for the sequence of partial sums:
\[\begin{align*} \sum^{\infty}_{k=1}{q^k} &= q + q^2 + q^3 + \ldots \\ &= \sum^{\infty}_{k=0}{q^k} - q^0 = \frac{1}{1-q} - 1 = \frac{q}{1-q} \end{align*} \]The series still converges. So we can say that changing or dropping finitely many initial terms has no effect on whether a series converges. However, it does have an effect on the value of the series: the geometric series for \(q = \frac{1}{2}\) starting at 1 converges to \(\frac{\frac{1}{2}}{1-\frac{1}{2}} = 1\) instead of 2.
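A quick numeric check of the closed forms:

```python
# Partial sums of the geometric series vs the closed form 1/(1 - q).
q = 0.5
s = sum(q**k for k in range(50))
print(s, 1 / (1 - q))      # both ~2 (index starting at 0)
print(s - 1, q / (1 - q))  # both ~1 (index starting at 1)
```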
Harmonic Series
The harmonic series
\[\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \ldots \]diverges. Despite the sequence of terms converging to zero, the series diverges. To see why, compare the partial sums \(S_{2n}\) and \(S_n\):
\[S_{2n} - S_n = \sum_{k=n+1}^{2n} \frac{1}{k} \geq n \cdot \frac{1}{2n} = \frac{1}{2} \]Each of the \(n\) summands is at least \(\frac{1}{2n}\), so doubling the number of terms always adds at least \(\frac{1}{2}\) to the partial sum. The partial sums therefore grow beyond all bounds (in particular they do not form a Cauchy sequence) and the series diverges. This is a good example to show that the terms of the sequence converging to zero is a necessary but not sufficient condition for the series to converge.
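Numerically the divergence is very slow; the partial sums grow roughly like \(\ln n\):

```python
# Harmonic partial sums grow without bound, but only logarithmically.
s = 0.0
for n in range(1, 1_000_001):
    s += 1 / n
    if n in (10, 1_000, 1_000_000):
        print(n, s)  # ~2.93, ~7.49, ~14.39
```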
Telescope Series
A classic example is the series
\[\sum_{n=1}^{\infty} \frac{1}{n(n+1)} \]which converges to 1. The key is the partial fraction decomposition \(\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}\). In the partial sums all intermediate terms cancel, so the sum collapses like a telescope, hence the name:
\[S_n = \left(1 - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + \ldots + \left(\frac{1}{n} - \frac{1}{n+1}\right) = 1 - \frac{1}{n+1} \]As \(n \to \infty\) the partial sums converge to 1, so the series converges to 1.
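A quick numeric confirmation of the collapsed partial sums:

```python
# Partial sums of 1/(n(n+1)) match the closed form 1 - 1/(N+1).
for N in [10, 100, 10000]:
    s = sum(1 / (n * (n + 1)) for n in range(1, N + 1))
    print(N, s, 1 - 1 / (N + 1))  # both columns approach 1
```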
Beware of Infinite Sums
An important note: in the case of series we can not just use our usual rules for the sum operator. The reason is that the sums are infinite, and the normal rules of arithmetic do not automatically carry over to infinite sums.
If we have two series \(\sum^{\infty}_{k=1}{a_k}\) and \(\sum^{\infty}_{j=1}{b_j}\) and both converge, then the following rules apply:
- \(\sum^{\infty}_{k=1}{c \cdot a_k}=c \cdot \sum^{\infty}_{k=1}{a_k}\) for \(c \in \mathbb{C}\)
- \(\sum^{\infty}_{k=1}{(a_k\pm b_k)}=\sum^{\infty}_{k=1}{a_k}\pm \sum^{\infty}_{k=1}{b_k}\)
To see why the convergence of both series matters, let’s try to split \(\sum^{\infty}_{n=1}{\frac{1}{n(n+1)}}\) into \(\sum^{\infty}_{n=1}{\frac{1}{n}} - \sum^{\infty}_{n=1}{\frac{1}{n+1}}\). We already know that the telescoping series converges to 1, but both of the split series diverge, so the split is an expression of the form "infinity minus infinity" that would have to equal 1.
Now try the same with \(0 = \sum^{\infty}_{n=1}{0} = \sum^{\infty}_{n=1}{(1-1)}\), split into \(\sum^{\infty}_{n=1}{1} - \sum^{\infty}_{n=1}{1}\). Again both split series diverge and we get "infinity minus infinity", but this time it would have to equal 0. Since the same expression would have to take two different values, it is not well defined, which is why the rules above may only be applied when both series converge.