# Laurent Series

We’ve established that if $f$ has an isolated singularity at $z_0$ and that $f$ has a pole of order $m$ at $z_0$ then $f(z) = \sum^{m}_{j=1} b_j\frac{1}{(z-z_0)^j} + \sum^{\infty}_{k=0} c_k(z-z_0)^k$ where the coefficients are as follows: $c_k = \frac{1}{2 \pi i} \int_{\gamma} \frac{f(w)}{(w-z_0)^{k+1}} dw$ and $b_j = \frac{1}{2 \pi i} \int_{\gamma} f(w)(w-z_0)^{j-1} dw$ where $\gamma$ is some circle taken once in the standard direction that encloses $z_0$ but no other singular point of $f$.

So here is what we will prove: if there exist $R > r \geq 0$ such that $f$ is analytic on the annulus $r < |z-z_0| < R$, then $f$ has a Laurent series expansion about $z_0$ valid for all $z$ in this set. Now IF $z_0$ is a singular point and our set is a punctured disk ($r = 0$), then the singularity is essential if and only if the principal part (the part of the Laurent series with the negative powers of $(z-z_0)$) has an infinite number of terms, and we have NOT yet proved that such a series exists in this case. We will remedy this now.

In this diagram: the (possible) singularity is at $z_0$ (this construction does not require that $z_0$ be a singular point), the large circle $\Gamma$ is a circle between $z_0$ and another singularity (that is why this works for isolated singularities), $z$ is some point enclosed by $\Gamma$ where we want to evaluate $f$, and the small circle $\gamma$ is a circle between $z$ and $z_0$. The green circle about $z$ is one between $\Gamma$ and $\gamma$; note that $f$ is analytic inside and just outside of this circle, so Cauchy’s Theorem applies.

Let $G$ denote the small circle enclosing $z$, taken once around in the standard direction, and note that $f(z) = \frac{1}{2 \pi i} \int_G \frac{f(w)}{w-z} dw$. But by the theory of cuts (integrate along $\Gamma$ in the standard direction, along one cut line, half way around $G$ in the opposite direction, once around $\gamma$ in the opposite direction, the other half of the way around $G$ in the opposite direction, and back along the cut line to $\Gamma$) we see that $-\int_G \frac{f(w)}{w-z} dw + \int_{\Gamma} \frac{f(w)}{w-z} dw - \int_{\gamma} \frac{f(w)}{w-z} dw = 0$, as together these paths bound a simply connected region where $f$ is analytic (and the integrals along the cut lines cancel out).

So we get $2 \pi i f(z) = \int_G \frac{f(w)}{w-z} dw = \int_{\Gamma} \frac{f(w)}{w-z} dw - \int_{\gamma} \frac{f(w)}{w-z} dw$. We switch the sign on the second integral:

$2 \pi i f(z) = \int_G \frac{f(w)}{w-z} dw = \int_{\Gamma} \frac{f(w)}{w-z} dw + \int_{\gamma} \frac{f(w)}{z-w} dw$

We now make the following observations: in the first integral (along $\Gamma$) we have $|w-z_0| > |z-z_0|$, and in the second integral (along $\gamma$) we have $|z-z_0| > |w-z_0|$.

Now look at the first integral, in particular the fraction $f(w) \frac{1}{w-z} = f(w) \frac{1}{(w-z_0) -(z-z_0)} = f(w) \frac{1}{w-z_0} \frac{1}{1-\frac{z-z_0}{w-z_0}}$

$= f(w) \frac{1}{w-z_0} (1 + \frac{z-z_0}{w-z_0} + (\frac{z-z_0}{w-z_0})^2 + (\frac{z-z_0}{w-z_0})^3 + \cdots)$ because we are in the region where the geometric series converges absolutely.

So our integral becomes $\int_{\Gamma} \frac{f(w)}{w-z} dw = \int_{\Gamma} f(w) \sum^{\infty}_{k=0} \frac{(z-z_0)^k}{(w-z_0)^{k+1}}dw$. In previous work we’ve shown that we can interchange integration and summation in this case, so we end up with $\sum^{\infty}_{k=0} (\int_{\Gamma} \frac{f(w)}{(w-z_0)^{k+1}} dw)(z-z_0)^k$, which yields the regular part of the Laurent series.

We now turn to what will become the principal part: the second integral.

$\int_{\gamma} \frac{f(w)}{z-w} dw$. Let’s focus on the fraction $\frac{1}{z-w}$

Now we write $\frac{1}{z-w} = \frac{1}{z-z_0-(w-z_0)} = \frac{1}{z-z_0}\frac{1}{1-\frac{w-z_0}{z-z_0}}$ and recall that, on $\gamma$, $|\frac{w-z_0}{z-z_0}| < 1$ so we can expand this into an infinite series (bounded by a convergent geometric series)

$\frac{1}{z-z_0}\frac{1}{1-\frac{w-z_0}{z-z_0}} = \frac{1}{z-z_0} (1 + \frac{w-z_0}{z-z_0} + (\frac{w-z_0}{z-z_0})^2 + (\frac{w-z_0}{z-z_0})^3 + \cdots)$

So now going back to the integral

$\int_{\gamma} \frac{f(w)}{z-w} dw = \int_{\gamma} f(w) \sum^{\infty}_{k=0} \frac{(w-z_0)^k}{(z-z_0)^{k+1}} dw$ and, once again, because the series is bounded by a convergent geometric series (details similar to those we used in developing the power series) we can interchange summation and integration to obtain

$\sum^{\infty}_{k=0} (\int_{\gamma} f(w)(w-z_0)^k dw ) \frac{1}{(z-z_0)^{k+1}}$. It is traditional to shift the index to write it as:

$\sum^{\infty}_{k=1} (\int_{\gamma} f(w)(w-z_0)^{k-1} dw ) \frac{1}{(z-z_0)^{k}}$

So, adding the two series together we have:

$2 \pi i f(z) =$

$\sum^{\infty}_{k=1} (\int_{\gamma} f(w)(w-z_0)^{k-1} dw ) \frac{1}{(z-z_0)^{k}} + \sum^{\infty}_{k=0} (\int_{\Gamma} \frac{f(w)}{(w-z_0)^{k+1}} dw)(z-z_0)^k$

Now divide both sides by $2 \pi i$ to obtain the desired result.
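These coefficient formulas can be sanity-checked numerically. Here is a quick sketch (not part of the proof, and the test function is my own choice) that approximates the contour integrals with a Riemann sum, using $e^z/z^2$, whose Laurent series about $0$ is $\frac{1}{z^2} + \frac{1}{z} + \frac{1}{2} + \frac{z}{6} + \cdots$:

```python
import cmath

def laurent_coeff(f, z0, n, radius=1.0, steps=4000):
    # (1/2πi) ∮ f(w) / (w - z0)^(n+1) dw over a circle of the given radius;
    # for n >= 0 this is c_n, and for n = -j (j >= 1) it is b_j.
    total = 0j
    for s in range(steps):
        t = 2 * cmath.pi * s / steps
        w = z0 + radius * cmath.exp(1j * t)
        dw = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / steps)
        total += f(w) * (w - z0) ** (-(n + 1)) * dw
    return total / (2j * cmath.pi)

# e^z / z^2 has a pole of order 2 at 0: e^z/z^2 = 1/z^2 + 1/z + 1/2 + z/6 + ...
f = lambda z: cmath.exp(z) / z**2
for n, expected in [(-2, 1.0), (-1, 1.0), (0, 0.5), (1, 1 / 6)]:
    print(n, abs(laurent_coeff(f, 0j, n) - expected))  # all tiny
```

The Riemann sum over a circle converges very quickly for integrands like this, so even a modest number of steps recovers the coefficients to near machine precision.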

One caveat: to define a Laurent series, one needs to know the center about which one is expanding and the annulus or disk of convergence. The derivation that we used above does not require $z_0$ to be a singularity or for there to be only one singularity inside of $\gamma$; we just require that $f$ be analytic on some open region containing the annulus with boundary circles $\Gamma$ and $\gamma$.

Actually obtaining the Laurent series

I did NOT show (but it is true) that Laurent series are unique (we did show that for power series), but the general principle still applies: once you get a Laurent series for a function about a singularity (often valid on some punctured disk about the singularity), you have the only one. But if you look at the proof, what was required is that $f$ be analytic in the region bounded by $\Gamma$ and $\gamma$ (an open annulus whose smaller boundary circle encloses the singularity in question).

So our Laurent series is valid on a punctured disk or on an annulus, with the latter being the case if, say, there were two singularities.

Now as far as obtaining the series: we do that by “hook or crook” and almost never by actually calculating those dreadful integrals.

1) $sin(\frac{1}{z})$ has an isolated essential singularity at $z=0$ and its Laurent series can be obtained by substituting $\frac{1}{z}$ for $z$ in the usual Taylor series:

$\sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{(2k-1)!z^{2k-1}} = \frac{1}{z} - \frac{1}{3!z^3} + \frac{1}{5!z^5} ...$
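A quick numeric check of this substitution (a sketch; the test point is arbitrary, any $z \neq 0$ works):

```python
import cmath

def sin_recip(z, terms=12):
    # partial sum of 1/z - 1/(3! z^3) + 1/(5! z^5) - ...
    total, fact = 0j, 1.0   # fact holds (2k-1)!
    for k in range(1, terms + 1):
        total += (-1) ** (k + 1) / (fact * z ** (2 * k - 1))
        fact *= (2 * k) * (2 * k + 1)   # (2k-1)! -> (2k+1)!
    return total

z = 0.3 + 0.4j   # arbitrary nonzero test point; the series converges for all z != 0
print(abs(sin_recip(z) - cmath.sin(1 / z)))  # tiny
```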

2) Now let’s look at something like $\frac{1}{1+z^2}$. Here there are two isolated poles of order 1: at $i$ and at $-i$. So the series we get will depend on where we expand AND where we want the expansion to be valid.

Two of the “obvious” expansion points would be $i$ and $-i$, and we’d expect the radius of validity to be 2 for each of these (the distance to the other singularity). We could also expand about, say, zero and expect a power series expansion with radius 1. Now if we want a series centered at 0 that is valid for a radius GREATER than 1, we can look at the infinite annulus $|z| > 1$, whose inner boundary circle encloses both singularities.

So let us start:

1) About $z_0 = 0$: here write $\frac{1}{1+z^2} = \frac{1}{1- (-z^2)} = 1 -z^2 + z^4-z^6 + \cdots =\sum^{\infty}_{k=0} (-1)^kz^{2k}$. Because $f$ is analytic on $|z| < 1$ this is a power series with no principal part.
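A one-line numeric check of this expansion (the test point is an arbitrary choice with $|z| < 1$):

```python
# check the geometric expansion of 1/(1+z^2) at a point with |z| < 1
z = 0.5 + 0.3j
partial = sum((-1) ** k * z ** (2 * k) for k in range(60))
print(abs(partial - 1 / (1 + z * z)))  # tiny
```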

2) About $z_0 = i$: one way to do it is to revert to partial fractions:

$\frac{1}{1+z^2} = \frac{1}{(z-i)(z+i)} = \frac{A}{z-i} + \frac{B}{z+i} \rightarrow$

$Az+Ai + Bz-Bi = 1 \rightarrow A+B=0, A-B = -i \rightarrow 2A = -i \rightarrow A =-\frac{i}{2}, B = \frac{i}{2}$

So $\frac{1}{1+z^2} = \frac{-i}{2} \frac{1}{z-i} + \frac{i}{2} \frac{1}{z+i}$. Note the second term is analytic near $z_0 = i$, so we can write a power series expansion for it:

$\frac{i}{2} \frac{1}{z-i+2i} = \frac{i}{2} \frac{1}{2i} \frac{1}{1+ \frac{z-i}{2i}} = \frac{1}{4} \sum^{\infty}_{k=0} (\frac{1}{-2i})^k (z-i)^k =\frac{1}{4} \sum^{\infty}_{k=0}(\frac{i}{2})^k (z-i)^k$ which is the regular part of the Laurent series.

The total series is

$\frac{-i}{2} \frac{1}{z-i} + \frac{1}{4}\sum^{\infty}_{k=0}(\frac{i}{2})^k (z-i)^k$ and the principal part has only one non-zero term, as expected.

The punctured disk of convergence has radius 2, as expected. I’ll leave it as an exercise to find the Laurent series about $z_0 = -i$ but the result will look very similar.
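One can spot-check the expansion about $i$ numerically; here is a sketch (the test point is arbitrary, subject to $0 < |z-i| < 2$):

```python
import cmath

def laurent_about_i(z, terms=60):
    # principal part + regular part of 1/(1+z^2) about z0 = i:
    # (-i/2)/(z-i) + (1/4) * sum_k ((i/2)(z-i))^k, valid for 0 < |z-i| < 2
    principal = (-0.5j) / (z - 1j)
    regular = 0.25 * sum((0.5j * (z - 1j)) ** k for k in range(terms))
    return principal + regular

z = 0.7 + 1.5j   # arbitrary test point with 0 < |z - i| < 2
print(abs(laurent_about_i(z) - 1 / (1 + z * z)))  # tiny
```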

Now for the Laurent series centered at $z_0 = 0$ but valid for $|z| > 1$. This will have an infinite number of terms with negative exponent but…this series is NOT centered at either singularity.

$\frac{1}{1+z^2} = \frac{i}{2}(\frac{1}{z+i} -\frac{1}{z-i}) = \frac{i}{2}\frac{1}{z}(\frac{1}{1+\frac{i}{z}} -\frac{1}{1-\frac{i}{z}}) =\frac{i}{2}\frac{1}{z} \sum^{\infty}_{k=0} ((-1)^k(\frac{i}{z})^k -(\frac{i}{z})^k)$

$= \frac{i}{2}\frac{1}{z} \sum^{\infty}_{k=1} (-2) (\frac{i}{z})^{2k-1}$ (the even powers cancel each other)

$= \sum^{\infty}_{k=1} (-1)(i)^{2k} (\frac{1}{z})^{2k} =\sum^{\infty}_{k=1}(-1)^{k+1}\frac{1}{z^{2k}}$

Which equals $\frac{1}{z^2} - \frac{1}{z^4} + \frac{1}{z^6} - \cdots$

Try graphing $\frac{1}{1+x^2}$ versus $\frac{1}{x^2} - \frac{1}{x^4} + \frac{1}{x^6} - \cdots$ up to, say, the 14th power, 16th power, ..., 22nd power, etc. on the range, say, $[1, 4]$. That gives you an idea of the convergence.
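If plotting isn’t handy, the same comparison can be done numerically (a sketch; the sample points are arbitrary values with $|x| > 1$):

```python
def tail(x, terms=30):
    # partial sum of 1/x^2 - 1/x^4 + 1/x^6 - ..., valid for |x| > 1
    return sum((-1) ** (k + 1) / x ** (2 * k) for k in range(1, terms + 1))

for x in (1.5, 2.0, 4.0):
    print(x, abs(tail(x) - 1 / (1 + x * x)))  # shrinks rapidly as x grows
```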

# Poles and zeros

We know what a zero of a function is, and here is what we know about a zero of an analytic function: if $f$ is analytic at a point $z_0$ for which $f(z_0) = 0$, then $f(z) = (z-z_0)^m g(z)$ where $g(z_0) \neq 0$ and $g$ is analytic, and this decomposition is unique (via the uniqueness of the power series). Here $m$ is the order of the zero.

What I did not tell you is that a zero of a NON-CONSTANT analytic function is isolated; that is, there is some $r > 0$ such that if $A = \{w \,|\, 0 < |w-z_0| < r \}$ is the punctured open disk of radius $r$ about $z_0$ and $w \in A$, then $f(w) \neq 0$. That is, a zero of an analytic function can be isolated from the other zeros.

Here is why: suppose not; then there is some sequence of zeros $z_k \rightarrow z_0$ with $f(z_k) = 0$. (How to construct it: choose a zero $z_1$ closer than 1 to $z_0$, then let $z_2$ be a zero of $f$ whose distance to $z_0$ is less than half the distance between $z_1$ and $z_0$, and keep repeating this process. It cannot terminate, because if it did, the zeros would be isolated from $z_0$.)

Note that since each $z_k \neq z_0$ we have $f(z_k) = (z_k-z_0)^m g(z_k) = 0$, which implies that $g(z_k) = 0$. But $g$ is continuous, and since $g(z_k) = 0$ for all $k$ and $z_k \rightarrow z_0$, we get $g(z_0) = \lim g(z_k) = 0$, which contradicts the fact that $g(z_0) \neq 0$. So, the zeros of a non-constant analytic function are isolated.

In fact, we can say even more: an analytic function is a “discrete mapping”; that is, roughly speaking, it takes a discrete set to a discrete set. But we’ll leave that alone, at least for now. See the Discrete Mapping Theorem in Bruce Palka’s complex variables book.

But for now, we’ll just stick with this.

Note: in real analysis, there is the concept of a “bump function”: one which is not zero on some interval (or region), is zero off of that region, and is infinitely smooth (has derivatives of all orders). This cannot happen with analytic complex functions.

Now onto poles.

A singularity of a complex function is a point at which the complex function is not analytic. A singularity is isolated if one can put an open disk around it on which the function is analytic EXCEPT at the singularity.

More formally: we say $f$ has an isolated singularity at $z_0$ if $f$ is NOT analytic at $z_0$ but $f$ is analytic on some set $0 < |z-z_0 | < R$ for some $R > 0$. (These sets are referred to as “punctured disks” or “deleted disks”.)

Example: $\frac{1}{z}$ has an isolated singularity at $z = 0$. $sec(z)$ has isolated singularities at $\frac{\pi}{2} \pm k \pi, k \in \{0, 1, 2,...\}$.

Example: $\frac{1}{z^n - 1}$ has isolated singularities at $w_k, k \in \{1, 2, ...n \}$ where $w_k$ is one of the n-th roots of unity. $e^{\frac{1}{z}}$ has an isolated singularity at $z = 0$

Example: $f(z) = \overline{z}$ is not analytic anywhere and therefore has no isolated singularities.
Example: $f(z) = \frac{1}{sin(\frac{1}{z})}$ has isolated singularities at $z = \frac{1}{\pm k \pi}, k \in \{1, 2, ..\}$ and has a non-isolated singularity at $z = 0$. $Log(z)$ has non-isolated singularities at the origin and along the negative real axis.

We will deal with isolated singularities.

There are three types:

1. Removable or “inessential”. This is the case where $f$ is technically not analytic at $z_0$ but $lim_{z \rightarrow z_0}f(z)$ exists. Think of functions like $\frac{sin(z)}{z}, \frac{e^z-1}{z},$ etc. It is easy to see what the limits are; just write out the power series centered at $z = 0$ and do the algebra.

What we do here is to say: if $lim_{z \rightarrow z_0} f(z) = l$ then let $g(z) = f(z), z \neq z_0, g(z_0) = l$ and note that $g$ is analytic. So, it does no harm to all but ignore the inessential singularities.

2. Essential singularities: these are weird objects. Here is what I will say: $f$ has an essential singularity at $z_0$ if $z_0$ is neither removable nor a pole. Of course, I need to tell you what a pole is.

Why these are weird: if $f$ has an essential singularity at $z_0$, and $A$ is ANY punctured disk centered at $z_0$ containing no other singularities, and $w$ is ANY complex number (with at most one exception), then there exists $z_w \in A$ where $f(z_w) = w$. This is startling; it basically means that $f$ maps every punctured disk around an essential singularity onto the entire complex plane, possibly minus one point. This is the Great Picard Theorem. (We will prove a small version of this, not the full thing.)

Example: $f(z) = e^{\frac{1}{z}}$ has an essential singularity at $z = 0$. If this seems like an opaque claim, it will be crystal clear when we study Laurent series, which are basically power series, but possibly with terms with negative powers.

3. Poles (what we will spend our time with). If $f$ is not analytic at $z_0$ but there is some positive integer $m$ such that $lim_{z \rightarrow z_0} (z-z_0)^m f(z)$ exists and is non-zero, then we say that $f$ has a pole of order $m$ at $z_0$.

“Easy examples”: $\frac{1}{z}$ has a pole of order 1 at the origin (called a “simple pole”). $\frac{1}{(z^2+1)(z-2i)^2}$ has simple poles at $\pm i$ and a pole of order 2 (called a “double pole”) at $2i$. $\frac{sin(z)}{z^3}$ has a pole of order 2 at the origin. (If that seems like a strange claim, write out the power series for $sin(z)$ and then do the division.)

Relation to zeros:

Fact: if $f$ is analytic and has a zero of order $m$ at $z_0$, then $\frac{1}{f}$ has a pole of order $m$ at $z_0$.

Note: because zeros of analytic functions are isolated, the singularity of $\frac{1}{f}$ is isolated. Now write $f(z) = (z-z_0)^m g(z)$ with $g$ analytic and $g(z_0) \neq 0$; then

$(z-z_0)^m \frac{1}{f(z)} = \frac{1}{g(z)}$ is analytic at $z_0$ (no zero in the denominator any longer). Note that $m$ is the smallest integer that works, because any smaller power would leave a zero in the denominator.

Fact: if $f$ has a pole of order $m$ at $z_0$ then $\frac{1}{f}$ has a zero of order $m$ at $z_0$.

Reason: let $g(z) = (z-z_0)^m f(z)$ be the associated analytic function (with $m$ being the smallest integer that works). Since this is the smallest $m$ that works, we can assume that $g(z_0) \neq 0$ (look at the power series for $g$: if the constant term were zero, a smaller $m$ would have worked).

Now $(z-z_0)^m \frac{1}{g(z)} = \frac{1}{f(z)}$ has a zero of order $m$ at $z = z_0$ (the denominator is not zero there).

So the poles and zeros of an analytic function are very closely linked; they are basically the duals of each other. The calculus intuition of “check for zeros in the denominator” works very well here.

Onward to Laurent series and residues

Start with poles of order $m$: if $f$ has a pole of order $m$ at $z_0$ then we know $(z-z_0)^m f(z) = g(z)$ is analytic on some open disk of convergence about $z_0$.

So we can write $g(z) = a_0 + a_1(z-z_0) +a_2(z-z_0)^2 +....= \sum^{\infty}_{k=0}a_k (z-z_0)^k$.

So $(z-z_0)^m f(z) = a_0 + a_1(z-z_0) +a_2(z-z_0)^2 +....= \sum^{\infty}_{k=0}a_k (z-z_0)^k$.

Now divide both sides by $(z-z_0)^m$ and look at what happens to the series:

$f(z) = a_0(z-z_0)^{-m} + a_1(z-z_0)^{-m+1} + \cdots +a_{m-1}(z-z_0)^{-1} + a_{m} + a_{m+1}(z-z_0) + a_{m+2}(z-z_0)^2 + \cdots$

So while $f$ does not have a power series centered at $z_0$ it does have a series of a sort. Such a series is called a Laurent series. It is traditional to write:

$f(z) = \sum^{\infty}_{j=1} b_j (z-z_0)^{-j} + \sum^{\infty}_{k=0} a_k (z-z_0)^k$. Of course, for a pole of order $m$, at most $m$ terms of the first series will have non-zero coefficients. For an essential singularity, an infinite number of the coefficients will be non-zero. We will see that the first series yields a function that is analytic for $\{ w | |w-z_0| > r \}$ for some $r >0$ and the second series, a power series, is analytic within some open disk of convergence (as usual).

Terminology: $\sum^{\infty}_{j=1} b_j (z-z_0)^{-j}$ is called the principal part of the Laurent series, and $\sum^{\infty}_{k=0} a_k (z-z_0)^k$ is called the regular part.

$b_1$ (the coefficient of the $\frac{1}{z-z_0}$ term) is called the residue of $f$ at $z_0$.

Why this is important: Let $\gamma$ be a small circle that encloses $z_0$ but no other singularities. Then $\int_{\gamma} f(z)dz = 2 \pi i b_1 = 2 \pi i Res(f, z_0)$. This is the famous Residue Theorem and this arises from simple term by term integration of the Laurent series. For a pole it is easy: the integral of the regular part is zero since the regular part is an analytic function, so we need only integrate around the terms with negative powers and there are only a finite number of these.

For $k \geq 2$, each $b_k \frac{1}{(z-z_0)^k}$ has a primitive, so each of those integrals is zero as well; only $b_1 \frac{1}{z-z_0}$ has no primitive. So ONLY THE $b_1$ term matters, with regards to integrating around a closed loop!

The proof is not quite as straightforward if the singularity is essential, though the result still holds. For example:

$e^{\frac{1}{z}} = 1 + \frac{1}{z} + \frac{1}{2!z^2} + \frac{1}{3!z^3} + \cdots + \frac{1}{k!z^k} + \cdots$ but the result still holds; we just have to be a bit more careful about justifying term by term integration.
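As a numeric illustration (a sketch, not a justification): approximating $\int_{|z|=1} e^{1/z} dz$ with a Riemann sum should return roughly $2\pi i$, since the residue is $1$:

```python
import cmath

def circle_integral(f, center=0j, radius=1.0, steps=6000):
    # Riemann sum for the integral of f(z) dz around a circle, standard orientation
    total = 0j
    for s in range(steps):
        t = 2 * cmath.pi * s / steps
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / steps)
        total += f(z) * dz
    return total

# residue of e^(1/z) at 0 is the b_1 coefficient, which is 1
val = circle_integral(lambda z: cmath.exp(1 / z))
print(abs(val - 2j * cmath.pi))  # tiny
```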

So, if $f$ is at all reasonable (only isolated singularities), then integrating $f$ along a closed curve amounts to finding the residues within the curve (and having the curve avoid the singularities, of course), adding them up and multiplying by $2 \pi i$. Note: this ONLY applies to $f$ with isolated singularities; for other functions (say $f(z) = \overline{z}$) we have to grit our teeth and parameterize the curve the old fashioned way.

Now FINDING those residues can be easy, or at times, difficult. Stay tuned.

# Some Big Theorems

Now we’ve established the following: if $f$ is analytic on an open connected set $A$ and $z_0 \in A$, then $f$ has derivatives of all orders and $f(z) = \sum^{\infty}_{k=0} a_k (z-z_0)^k$ where $a_k = \frac{f^{(k)}(z_0)}{k!}$ and where $f^{(k)}(z_0) =\frac{k!}{2\pi i} \int_{C_r} \frac{f(w)}{(w-z_0)^{k+1}} dw$, where $C_r$ is a circle $|z-z_0|=r$ contained in $A$.

This puts us in a position to prove some interesting theorems, concepts and results that we will use.

We start with Morera’s Theorem. Basically, this theorem says that if $f$ is continuous on some open connected set $A$ AND if $\int_{\gamma} f(z) dz = 0$ for all closed curves $\gamma$ of a specific type (our text uses triangles, but I’ve seen some texts use rectangles, or even rectangles with sides parallel to the real and imaginary axes), then $f$ is analytic in $A$. Note: the converse is false; for example, $f(z) = \frac{1}{z}$ is analytic on $C-\{0\}$, but if $\gamma$ is any piecewise smooth simple closed curve enclosing the origin (with standard orientation) then $\int_{\gamma} \frac{1}{z} dz = 2 \pi i$.

We will see that this is the prototypical thing that can go wrong.

Now, for a sketch of the proof: let $z \in A$ and define $F(z) = \int^{z}_{z_0} f(w) dw$, where the integration is along the straight line path from $z_0$ to $z$.

Let $h$ be a small complex number. Then $F(z+h) =\int^{z+h}_{z_0} f(w)dw = \int^{z}_{z_0}f(w)dw + \int^{z+h}_z f(w)dw$ because the integral around a triangle is assumed to be zero. Hence $F(z+h) - F(z) = \int^{z+h}_z f(w)dw$

Now $|\frac{F(z+h)-F(z)}{h} - f(z)| = |\int^{z+h}_z \frac{f(w)-f(z)}{h} dw |$ (note: $f(z) = \int^{z+h}_z \frac{f(z)}{h} dw$)

Choose $z, w$ close enough so that $|f(w) -f(z)| < \epsilon$ ($f$ is continuous).
Then $|\int^{z+h}_z \frac{f(w)-f(z)}{h} dw | < \frac{\epsilon}{|h|}|h| = \epsilon$

It now follows that $F'(z) = f(z)$ (that is, $f$ has a primitive on $A$), and because $F$ is analytic, so is $f = F'$ (derivatives of analytic functions are themselves analytic).

Theorem: Let $f$ be analytic on an open connected set $A$ and suppose there is some $z_0 \in A$ where $f^{(k)}(z_0) = 0$ for all $k \in \{0, 1, 2,... \}$. Then $f$ is identically zero on $A$.

Note: this is false for real variable functions.

Proof: a careful proof depends on some topology and more advanced properties, so we’ll prove it for an open disk and note that two points in $A$ can be connected by a finite number of overlapping disks and one can show that $f$ is identically zero on each of these disks.

Now for the single disk: here $f(z) = \sum^{\infty}_{k=0} a_k (z-z_0)^k$ with every $a_k = \frac{f^{(k)}(z_0)}{k!} = 0$, hence $f(z) = 0$ on this whole disk.

Concept: a zero of an analytic function and its order.

If $f$ is analytic and $f(z_0) = 0$ we say that $f$ has a zero at $z_0$. Because $f$ is analytic we see that $f(z) = \sum^{\infty}_{k=0} a_k (z-z_0)^k$. If all $a_k = 0$ then $f$ is the zero function. Otherwise, if $a_0, a_1, ...a_{m-1}$ are all zero but $a_m \neq 0$, we say that $z_0$ is a zero of order $m$.

Example: $f(z) = z^2 + 1$ has zeros of order 1 at $z = \pm i$; $sin(z)$ has zeros of order 1 at $z = \pm k \pi, k \in \{0, 1, 2,... \}$; $g(z) = (z+i)^3(z^2-9)$ has a zero of order 3 at $-i$ and zeros of order 1 at $\pm 3$.

Now let $f$ have a zero of order $m$ at $z_0$. Then
$f(z) = a_m (z-z_0)^m + a_{m+1} (z-z_0)^{m+1} + ... =(z-z_0)^m \sum^{\infty}_{k=0} a_{m+k}(z-z_0)^k$

Let $g(z) = \sum^{\infty}_{k=0} a_{m+k}(z-z_0)^k$; this is an analytic function that does NOT have a zero at $z_0$.

So $f(z) = (z-z_0)^m g(z)$ where $g(z_0) \neq 0$ and $g$ is analytic. This is an important concept.

Liouville’s Theorem: let $f$ be entire (analytic on the whole complex plane). If $f$ is bounded, then $f$ is a constant function.

Note: this is completely different from real variable functions. For example: $sin(x)$ has a valid power series expansion at $x = 0$ that is valid on the whole real line, and $|sin(x)| \leq 1$.

Proof of Liouville’s Theorem: I’ll present an alternate proof that uses Cauchy’s integral formula for derivatives. Let $f$ be entire and bounded; say $|f(z)| \leq M$ for all $z$. Now for any $z$ we have $f'(z) = \frac{1}{2 \pi i} \int_{C_R} \frac{f(w)}{(w-z)^2} dw$, where $C_R$ is the circle $|w| = R$ with $R$ large enough that $C_R$ encloses $z$. Note: $R$ can be as large as desired.

Now $|f'(z)| = \frac{1}{2 \pi} |\int_{C_R} \frac{f(w)}{(w-z)^2} dw| \leq \frac{1}{2 \pi} \frac{M}{(R-|z|)^2} (2 \pi R)$ (the maximum modulus of the function being integrated times the arc length) so $|f'(z)| \leq \frac{MR}{(R-|z|)^2}$ for ALL $R > |z|$. Let $R \rightarrow \infty$ and get that $|f'(z)| = 0$ for ANY $z$. Hence $f'(z) = 0$ for all $z$ and therefore $f$ is constant.

Now we present the book’s proof which does not use the Cauchy integral formula.

Let $F(z) = a_0 + a_1 z + a_2 z^2 + a_3 z^3 + \cdots$ be entire with $|F(z)| \leq M$, and let $g(z) = a_1 + a_2 z + a_3 z^2 + \cdots$ (think: $g(z) = \frac{F(z) - F(0)}{z}$ for $z \neq 0$ and $g(0) = a_1$).

Note $|g(z)| = \frac{|F(z)-F(0)|}{|z|}$ for $z \neq 0$, so on the circle $|z| = R$ we have $|g(Re^{it})| \leq \frac{M+M}{R} = \frac{2M}{R}$.

So $g(z) = \frac{1}{2 \pi i} \int_{C_R} \frac{g(w)}{w-z} dw$, and so $|g(z)| =\frac{1}{2\pi}|\int_{C_R} \frac{g(w)}{w-z} dw| \leq \frac{1}{2 \pi} \frac{2M}{R} \frac{1}{R-|z|} 2 \pi R = \frac{2M}{R-|z|}$, which goes to zero as $R \rightarrow \infty$. Hence $g$ is identically zero, so $a_1 = a_2 = \cdots = 0$ and $F$ is the constant $a_0$.

We now go to the Fundamental Theorem of Algebra.

Let $P(z)$ be any non-constant polynomial with complex coefficients. Then $P(z)$ has at least one root.

Proof: suppose $P$ has no roots; then $\frac{1}{P(z)}$ is an entire function. Note also: if $P(z) = a_0 + a_1z + a_2 z^2 + ...+a_n z^n$ with $a_n \neq 0$, then $|P(z)| \geq |a_n||z|^n - |a_{n-1}||z|^{n-1} - ...-|a_0|$, and the right hand side goes to infinity with $|z|$. So choose $R$ so large that $|P(z)| \geq 1$ for $|z| \geq R$. On the closed disk $|z| \leq R$, $\frac{1}{|P(z)|}$ is continuous and so attains a maximum $M$ there. Hence $\frac{1}{|P(z)|} \leq max \{M, 1 \}$ everywhere. This means that $\frac{1}{P(z)}$ is a bounded, entire function and therefore constant, which is only possible if $P$ is a constant polynomial. So ALL non-constant polynomials with complex coefficients have complex roots.

We can say a bit more: let $z_1$ be a root of $P$, say of order $m_1$. Then $P(z) = (z-z_1)^{m_1} g_1(z)$ where $g_1(z)$ is analytic and does NOT have a zero at $z_1$. Note that $g_1(z)$ is a polynomial too. So if $g_1(z)$ is not constant, we can find a root $z_2$ of order $m_2$ and proceed:

$P(z) = (z-z_1)^{m_1}(z-z_2)^{m_2}....(z-z_k)^{m_k} g_{k+1}$ where $g_{k+1}$ is a constant. Comparing degrees, we see that $m_1 +m_2 +...+m_k = deg(P)$.
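The theorem is purely existential, but root-finding can be illustrated numerically. Here is a sketch of the standard Durand-Kerner iteration (this method is my addition, not from the text); it refines $n$ candidate roots simultaneously:

```python
import cmath

def peval(coeffs, z):
    # a_0 + a_1 z + ... + a_n z^n, coefficients in ascending order
    return sum(c * z ** k for k, c in enumerate(coeffs))

def roots(coeffs, iters=200):
    # Durand-Kerner: update each guess r_i by Newton-like steps using
    # the product of differences with the other guesses as the "derivative"
    n = len(coeffs) - 1
    an = coeffs[-1]
    guesses = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct starting points
    for _ in range(iters):
        new = []
        for i, r in enumerate(guesses):
            denom = an
            for j, s in enumerate(guesses):
                if j != i:
                    denom *= r - s
            new.append(r - peval(coeffs, r) / denom)
        guesses = new
    return guesses

rs = roots([1, 0, 1])  # z^2 + 1: roots should come out near ±i
print([abs(r - 1j) < 1e-8 or abs(r + 1j) < 1e-8 for r in rs])
```

This illustrates the factorization above: once the $z_j$ are in hand, $P(z) = a_n \prod_j (z - z_j)^{m_j}$ with the multiplicities summing to the degree.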

Again, this is FALSE for polynomials with real coefficients if we insist on real roots; $x^2+ 1$ does not factor over the reals.

# Cauchy integral formula; power series for analytic functions, etc.

We have two goals in mind here: the first one is to show the following:

Let $f$ be analytic on some connected open set $A$ and $C$ some simple closed curve in $A$ (oriented in the standard direction; as usual, assume that the curve is piecewise smooth). Since this course is just a first pass at complex variables, we assume that $f$ has a continuous derivative on $A$ (that is not a necessary assumption, but good enough for us). Let $z$ be any point in the interior of the region bounded by $C$. Then the following holds:

1. $f$ has derivatives of all orders and $f^{(k)}(z) = \frac{k!}{2 \pi i} \int_C \frac{f(w)}{(w-z)^{k+1}} dw$ for $k \in \{1,2,3,... \}$ (you can remember the formula by differentiating with respect to $z$ under the integral sign).

2. If $z_0 \in A$ then $f(z) = \sum^{\infty}_{k=0} a_k (z-z_0)^k$ on some disk of convergence that lies in $A$, where $a_k = \frac{f^{(k)}(z_0)}{k!}$ (Taylor’s formula).

Now there are several approaches to doing this. One CAN prove the derivative formula directly and there is some merit to doing it that way. But if we follow our text, we’ll get the series expansion first and then the derivative formulas follow very easily.

We will assume the uniqueness of the power series expansion and that the power series can be differentiated term by term.

First result: let $f = \sum^{\infty}_{k=0} a_k (z-z_0)^k$ on some open disk of convergence (that is, let $f$ have a power series expansion). Then $a_k = \frac{f^{(k)}(z_0)}{k!}$.

That is, IF $f$ has a power series expansion, it has to be the Taylor one.

Proof: $f(z_0) = a_0$ (because the higher order terms are all zero). Now $f'(z) = a_1 + 2a_2(z-z_0) + 3a_3(z-z_0)^2 + ....$. Substitute $z=z_0$ to obtain $a_1 = f'(z_0)$.

Differentiate again: $f^{(2)}(z) = 2a_2 + 3 \cdot 2 a_3(z-z_0) + 4 \cdot 3 a_4(z-z_0)^2 + \cdots$; then $f^{(2)} (z_0) = 2a_2$, and we can continue on in a similar manner (technically, this is an easy induction argument).

Therefore the coefficients are what we said they were…PROVIDED the power series expansion exists to begin with, which we have yet to show.

I am going to take things in a different order than the text does..just slightly.

For now, let $a_k = \frac{1}{2 \pi i} \int_C \frac{f(w)}{(w-z_0)^{k+1}} dw$ and we will show that $\sum^{\infty}_{k=0} a_k (z-z_0)^k$ converges uniformly on some open disk containing $z_0$ and $z$. That will justify the “term by term” integration that we will do.

The text does this in a slightly different setting: it breaks the series into a finite part (where one can safely interchange integration and summation: the sum of the integrals is the integral of the sum) and an infinite tail which can be made as small as we’d like. In the limit, we get the infinite series with an error term that goes to zero.

Now, back to our approach.

By our theory of cuts, we can assume that the curve of our integral is a small circle about $z_0$, with $z$ another point in the open disk bounded by the circle; remember that the points $w$ now lie on that circle and are all a distance of $r$ away from $z_0$, and that $|z-z_0| = d < r$.

Now note that on our new $C$, $|\int_C \frac{f(w)}{(w-z_0)^{k+1}} dw | \leq 2 \pi r M \frac{1}{r^k}$ where $M = Max \{ |\frac{f(w)}{(w-z_0)}|, w \in C \}$ which has a maximum since $C$ is a closed curve and $\frac{f(w)}{(w-z_0)}$ is continuous there.

Now for each $k \in \{1, 2, ....\}$ we find that $|a_k(z-z_0)^k | \leq r M (\frac{d}{r})^k$ and $\frac{d}{r}$ is some fixed number less than 1. As a consequence, $\sum^{\infty}_{k = 0} |a_k||(z-z_0)^k| \leq r M \sum^{\infty}_{k = 0} (\frac{d}{r})^k$ and the latter is a convergent geometric series. Hence the series $\sum^{\infty}_{k = 0} a_k(z-z_0)^k$ is absolutely and uniformly convergent on the disk $|z-z_0| \leq d$.

Yes, if you’ve had real analysis, you recognize the Weierstrass M test.

So, we can safely switch integration and infinite summation here.

Still, we’ve yet to produce the series itself, but we are about to:

We start by realizing that $f(z) = \frac{1}{2 \pi i } \int_C \frac{f(w)}{(w-z)} dw$, and we’ll convert the right hand side of the equation into our desired series.

Note $\frac{1}{w-z} = \frac{1}{(w-z_0)-(z-z_0)} = \frac{1}{w-z_0} \frac{1}{1- \frac{z-z_0}{w-z_0}} = \frac{1}{w-z_0} (1 + \frac{z-z_0}{w-z_0} + (\frac{z-z_0}{w-z_0})^2 + (\frac{z-z_0}{w-z_0})^3 +...)$ which is absolutely convergent because $|\frac{z-z_0}{w-z_0}| = \frac{d}{r} < 1$

Now our integral becomes $\frac{1}{2 \pi i } \int_C \sum^{\infty}_{k=0} (z-z_0)^k \frac{1}{(w-z_0)}\frac{f(w)}{(w-z_0)^k} dw = \frac{1}{2 \pi i } \int_C \sum^{\infty}_{k=0} (z-z_0)^k \frac{f(w)}{(w-z_0)^{k+1}} dw$

Now we can safely interchange the summation and integration signs:

$\sum^{\infty}_{k=0} (z-z_0)^k (\frac{1}{2 \pi i } \int_C \frac{f(w)}{(w-z_0)^{k+1}} dw )$ and that gives us our power series expansion of $f$. But we already saw what the power series expansion of $f$ had to be:

$\frac{1}{2 \pi i } \int_C \frac{f(w)}{(w-z_0)^{k+1}} dw = \frac{f^{(k)}(z_0)}{k!}$ and multiplying both sides by $k!$ finishes the derivative formulas.

So, there is quite a bit here. With the assumption that $f$ is analytic with a continuous derivative, we find that $f$ has a power series expansion AND we get a nice formula for the derivative of a function.

Now, of course, you don’t compute derivatives in that manner; this is mostly a theoretical result, but it is a very important one.
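Still, the formula is easy to check numerically. Here is a sketch (the choice $f = e^z$, $z_0 = 0$ is mine, picked because every derivative there is $1$):

```python
import cmath
import math

def nth_derivative(f, z0, k, radius=1.0, steps=4000):
    # (k!/2πi) ∮ f(w)/(w - z0)^(k+1) dw, approximated by a Riemann sum on a circle
    total = 0j
    for s in range(steps):
        t = 2 * cmath.pi * s / steps
        w = z0 + radius * cmath.exp(1j * t)
        dw = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / steps)
        total += f(w) / (w - z0) ** (k + 1) * dw
    return math.factorial(k) * total / (2j * cmath.pi)

# every derivative of e^z at 0 equals 1
for k in range(4):
    print(k, abs(nth_derivative(cmath.exp, 0j, k) - 1))  # all tiny
```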

# Fresnel Integrals

In this case, we want to calculate $\int^{\infty}_0 cos(x^2) dx$ and $\int^{\infty}_0 sin(x^2) dx$. Like the previous example, we will integrate a specific function along a made-up curve. But unlike the previous example, we will be integrating an analytic function, so the integral along the simple closed curve will be zero; what we need is for the integral along one leg (the circular arc) to go to zero as the contour grows.

The function we integrate is $f(z) = e^{iz^2}$. Now along the real axis, $y = 0$, and so $e^{i(x+iy)^2} =e^{ix^2} = cos(x^2) + isin(x^2)$ and $dz = dx$. So the integral along the positive real axis will be the integrals we want, with $\int^{\infty}_0 cos(x^2) dx$ being the real part and $\int^{\infty}_0 sin(x^2) dx$ being the imaginary part.

So here is the contour

Now look at the top leg of the wedge: $z = te^{i \frac{\pi}{4}}, t \in [0,R]$ (taken in the “negative direction”).

So $z^2 = t^2 e^{i \frac{\pi}{2}} = t^2 (cos(\frac{\pi}{2}) + isin(\frac{\pi}{2})) = it^2 \rightarrow e^{iz^2} = e^{i(it^2)} = e^{-t^2}$

We still need $dz = (cos(\frac{\pi}{4}) + isin(\frac{\pi}{4})) dt = \frac{\sqrt{2}}{2}(1+i)dt$

So the integral along this line becomes $- \frac{\sqrt{2}}{2}(1+i)\int^{R}_0 e^{-t^2} dt \rightarrow -\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2}(1+i)$ as $R \rightarrow \infty$, using the Gaussian integral $\int^{\infty}_0 e^{-t^2} dt = \frac{\sqrt{\pi}}{2}$. Now IF we can show that, as $R \rightarrow \infty$, the integral along the circular arc goes to zero, we have:

$\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2}(1+i) = \int^{\infty}_0 cos(x^2) dx + i \int^{\infty}_0 sin(x^2) dx$. Now equate real and imaginary parts to obtain:

$\int^{\infty}_0 cos(x^2) dx = \frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2} = \frac{\sqrt{2 \pi}}{4}$ and $\int^{\infty}_0 sin(x^2) dx =\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2} = \frac{\sqrt{2 \pi}}{4}$
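These values can be sanity-checked numerically even though the integrands never decay: the partial integrals oscillate around the limit, and averaging them at two consecutive zeros of $sin(x^2)$ (that is, at $x = \sqrt{N \pi}$) damps the oscillation. A Python sketch (my addition, not part of the original argument; `simpson` is a hand-rolled composite Simpson rule):

```python
import math

def simpson(f, lo, hi, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for j in range(1, n):
        s += (4 if j % 2 else 2) * f(lo + j * h)
    return s * h / 3

target = math.sqrt(2 * math.pi) / 4            # the claimed common value
X1, X2 = math.sqrt(50 * math.pi), math.sqrt(51 * math.pi)

# average the partial integrals at two consecutive zeros of sin(x^2)
c = (simpson(lambda x: math.cos(x * x), 0.0, X1, 40000)
     + simpson(lambda x: math.cos(x * x), 0.0, X2, 40000)) / 2
s = (simpson(lambda x: math.sin(x * x), 0.0, X1, 40000)
     + simpson(lambda x: math.sin(x * x), 0.0, X2, 40000)) / 2
```

Both averages land within a few parts in $10^4$ of $\frac{\sqrt{2\pi}}{4} \approx 0.6267$.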

So let’s set out to do just that: here $z = Re^{it}, t \in [0, \frac{\pi}{4}]$ so $e^{iz^2} = e^{iR^2e^{2it}} = e^{iR^2(cos(2t) + isin(2t))} = e^{iR^2cos(2t)}e^{-R^2sin(2t)}$. We now have $dz = iRe^{it} dt$ so now $|\int^{\frac{\pi}{4}}_0 e^{iR^2cos(2t)}e^{-R^2sin(2t)}iRe^{it} dt| \leq \int^{\frac{\pi}{4}}_0|e^{iR^2cos(2t)} || e^{-R^2sin(2t)} ||iRe^{it}| dt =$

$\int^{\frac{\pi}{4}}_0 e^{-R^2sin(2t)} R dt$

Now note: for $t \in [0, \frac{\pi}{4}]$ we have $sin(2t) \geq \frac{2}{\pi}t$

$\rightarrow e^{-R^2 sin(2t)} \leq e^{-R^2\frac{2}{\pi}t}$ hence

$\int^{\frac{\pi}{4}}_0 e^{-R^2sin(2t)} R dt \leq \int^{\frac{\pi}{4}}_0 e^{-R^2\frac{2}{\pi}t} R dt = \frac{1}{R}\frac{\pi}{2}(1-e^{-R^2\frac{1}{2}})$ and this goes to zero as $R$ goes to infinity.
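To see the bound in action, here is a small numerical check (my sketch; `arc_integral` is an invented name): it estimates the arc integral directly and confirms it stays under $\frac{\pi}{2R}$.

```python
import cmath
import math

def arc_integral(R, n=50000):
    # Midpoint-rule estimate of |integral of e^{iz^2} dz| over the arc
    # z = R e^{it}, t in [0, pi/4], where dz = i R e^{it} dt = i z dt.
    h = (math.pi / 4) / n
    total = 0j
    for j in range(n):
        t = (j + 0.5) * h
        z = R * cmath.exp(1j * t)
        total += cmath.exp(1j * z * z) * 1j * z * h
    return abs(total)

# the bound pi/(2R) derived above dominates the actual arc integral
for R in (2.0, 4.0, 8.0):
    assert arc_integral(R) <= math.pi / (2 * R)
```

In fact the oscillation makes the true arc integral quite a bit smaller than the bound.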

# Some real integral calculations

Note to other readers: if you know what a “residue integral” is, this post is too elementary for you.

Recall the Cauchy Integral Formula (which we proved in class): if $f$ is analytic on a simply connected open set $A$ and $\gamma$ is some piecewise smooth simple closed curve in $A$ and $z_0$ is in the region enclosed by $\gamma$, then $f(z_0) =\frac{1}{2\pi i} \int_{\gamma} \frac{f(w)}{w-z_0} dw$

It is somewhat startling that the integral of a related function along the boundary curve of a region determines the values of the function within that region.

And this fact, plus the estimate $|\int_{\gamma} f(w)dw | \leq M l(\gamma)$, where $l(\gamma)$ is the length of the curve $\gamma$ and $M = max \{|f(w)| : w \in \gamma \}$, can lead to the evaluation of integrals of real variable functions.

Here is one collection of examples: let’s try to calculate $\int^{\infty}_{-\infty} \frac{dx}{x^{2a} + 1}$ for $a \in \{1,2,3,... \}$

Now consider the closed curve $\gamma$ which runs from $-R$ to $R$ along the real axis and then from $R$ back to $-R$ along the “top semicircle” of $|z| = R$ (positive imaginary part); denote the semicircular arc by $C_r$. See the following figure:

So if we attempt to integrate $\frac{1}{z^{2a} + 1}$ along this contour we get $\int_{-R}^{R} \frac{1}{x^{2a} + 1}dx + \int_{C_r} \frac{1}{z^{2a} + 1} dz$

Now as we take a limit as $R \rightarrow \infty$, the first integral becomes the integral we wish to evaluate. The second integral: remember that $|z|$ is constant along that curve and we know that $|z^{2a} + 1| \geq |z^{2a}| - 1 =R^{2a} -1$ along this curve, hence $|\frac{1}{z^{2a} + 1}| \leq \frac{1}{R^{2a}-1}$ along $C_r$ (assuming $R > 1$)

So $|\int_{C_r} \frac{1}{z^{2a} + 1} dz| \leq \frac{1}{R^{2a} -1} R \pi$ because $l(C_r) = \pi R$ (think: magnitude of the integrand times the arc length).

Now $lim_{R \rightarrow \infty} \frac{1}{R^{2a} -1} R \pi =0$ (provided, of course, that $a \in \{1, 2,...\}$). So as $R$ goes to infinity, the integral around the entire curve becomes the integral along the real axis, which is the integral that we are attempting to calculate. Note that because $2a$ is even, $\frac{1}{x^{2a} + 1}$ is continuous on the whole real line.

This, of course, does not tell us what $\int^{\infty}_{-\infty} \frac{1}{x^{2a} + 1} dx$ is, but we can use the integral formula to calculate the integral around the whole curve, which, in the limit, is equal to the integral along the entire real axis.

So, in order to calculate the integral along the curve, we have to deal with where $\frac{1}{z^{2a} + 1}$ is NOT analytic. This means finding the roots of the denominator: $z^{2a} + 1$ that lie in the upper half plane (and are therefore contained within the curve when $R$ is large enough). There will be $a$ of these in the upper half plane.

Label these roots $w_1, w_2,...,w_a$. Now draw small circles $C_1, C_2, ..., C_a$ around each of these; the circles are small enough that each contains exactly ONE $w_k$. Within each of the circles, $\frac{1}{z^{2a}+1}$ is analytic EXCEPT at that root.

Now here is the key: for each root $w_k$, write $z^{2a} + 1 = (z-w_k)(p_k(z))$ where $p_k(z) = \frac{z^{2a} + 1}{z-w_k}$. Then for all $k$, $\int_{C_k} \frac{1}{z^{2a} + 1} dz = \int_{C_k} \frac{\frac{1}{p_k(z)}}{z-w_k }dz = 2 \pi i \frac{1}{p_k(w_k)}$ by the Cauchy Integral Formula ($\frac{1}{p_k(z)}$ is analytic within $C_k$ as we divided out the root within that region).

Now by using the method of cuts, the integral around the large curve $\gamma$ is just the sum of the integrals along the smaller circles around the roots. This figure is the one for $\frac{1}{z^4 + 1}$.

So, putting it all together:

$\int^{\infty}_{-\infty} \frac{1}{x^{2a} + 1} dx = (2 \pi i)(\frac{1}{p_1(w_1)} + \frac{1}{p_2(w_2)} +...+ \frac{1}{p_a(w_a)})$

And YES, the $i$ always cancels out so we do get a real valued answer.

I admit that calculation of $p_k(w_k)$ can get a bit tedious but conceptually it isn’t hard.
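The tedium is exactly the kind a machine handles well. Here is a Python sketch (my illustration; `integral_x2a` is an invented name) that builds the roots of $z^{2a}+1$, forms each $p_k(w_k)$ as a product of root differences, and sums:

```python
import cmath
import math

def integral_x2a(a):
    # The roots of z^(2a) + 1 are e^{i pi (2k+1)/(2a)}, k = 0, ..., 2a-1;
    # those with positive imaginary part lie inside the contour.
    roots = [cmath.exp(1j * math.pi * (2 * k + 1) / (2 * a)) for k in range(2 * a)]
    upper = [w for w in roots if w.imag > 0]
    total = 0j
    for w in upper:
        p = 1  # p_k(w_k): product of (w_k - w_j) over the other roots
        for v in roots:
            if v is not w:
                p *= (w - v)
        total += 2j * math.pi / p
    return total  # the imaginary part should be (numerically) zero
```

For $a = 1$ this returns $\pi$, and for $a = 2$ it returns $\frac{\pi}{\sqrt{2}}$, matching the worked examples.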

Let’s do this for $2a= 2$ and again for $2a = 4$.

For $\int^{\infty}_{-\infty} \frac{1}{x^2+1} dx$ note that $p_1(z) = \frac{z^2+1}{z-i} = z+i$ and so the integral is $2 \pi i (\frac{1}{i+i}) = 2 \pi \frac{1}{2} = \pi$ as expected (you can do this one with calculus techniques; use $\int \frac{1}{x^2+1} dx = arctan(x) + C$)

Now for $\int^{\infty}_{-\infty} \frac{1}{x^4+1} dx$

Label the roots $w_1 = \frac{\sqrt{2}}{2}(1+i), w_2 = \frac{\sqrt{2}}{2}(-1+i), w_3 = \frac{\sqrt{2}}{2}(-1-i), w_4 = \frac{\sqrt{2}}{2}(1-i)$

So $z^4+1 = (z-w_1)(z-w_2)(z-w_3)(z-w_4) \rightarrow$

$p_1(w_1) = (w_1-w_2)(w_1-w_3)(w_1-w_4), p_2(w_2) = (w_2-w_1)(w_2-w_3)(w_2-w_4)$

So $\int^{\infty}_{-\infty} \frac{1}{x^4+1} dx = 2 \pi i(\frac{1}{p_1(w_1)} + \frac{1}{p_2(w_2)}) =$

$2 \pi i (\frac{1}{(\sqrt{2})^3})(\frac{1}{(1)(1+i)(i)} + \frac{1}{(-1)(i)(-1+i)}) = \frac{\pi}{\sqrt{2}} i \frac{1}{i}(\frac{1}{1+i} - \frac{1}{-1+i})$

$= \frac{\pi}{\sqrt{2}}\frac{-1+i -( 1+i)}{(1+i)(-1+i)} = \frac{\pi}{\sqrt{2}}\frac{-2}{-2} = \frac{\pi}{\sqrt{2}}$
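As a cross-check on the contour computation (my addition), direct numerical quadrature of the real integral gives the same number; truncating at $\pm 100$ costs at most $2\int_{100}^{\infty} x^{-4} dx \approx 7 \times 10^{-7}$.

```python
import math

def simpson(f, lo, hi, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for j in range(1, n):
        s += (4 if j % 2 else 2) * f(lo + j * h)
    return s * h / 3

# integrand decays like x^-4, so the truncated tail is negligible
val = simpson(lambda x: 1 / (x ** 4 + 1), -100.0, 100.0, 200000)
```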

# A summary of some integral theorems

This post will contain no proofs but rather, statements of theorems that we will use. Note: all curves, unless stated otherwise, will be piecewise smooth and taken in the standard direction.

1. Given $f$ analytic on some open domain $D$ and $\gamma$ a simple closed curve in $D$ whose bounded region is also in $D$. Then $\int_{\gamma}f(w)dw = 0$

Note: the curve in question is a simple closed curve whose bounded region is in a domain where $f$ is analytic.

So, we note that $f(w) = \frac{1}{w}$ is analytic in the open annulus $A= \{z| 1 < |z| < 3 \}$ and the curve $|z|=2$ lies in $A$, but $\int_{|z|=2} \frac{1}{w} dw = 2 \pi i \neq 0$. The reason this does not violate this result is that the region bounded by the curve, $\{z| |z| < 2 \}$, is NOT contained in $A$.
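The value $2\pi i$ is easy to reproduce numerically (a quick sketch of mine): parametrize $|z| = 2$ by $w = 2e^{i\theta}$, so $dw = iw \, d\theta$ and each summand of the Riemann sum is exactly $\frac{2\pi i}{n}$.

```python
import cmath
import math

n = 4096
total = 0j
for j in range(n):
    theta = 2 * math.pi * j / n
    w = 2 * cmath.exp(1j * theta)    # point on the curve |z| = 2
    dw = 1j * w * (2 * math.pi / n)  # dw = i w dtheta
    total += dw / w                  # each term equals 2*pi*i/n
```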

2. If $f$ is analytic within a simply connected open domain $D$ and $\gamma$ is a closed curve (not necessarily a simple closed curve; $\gamma$ might have self-intersections), then $\int_{\gamma} f(w)dw = 0$. Note that this result follows from a careful application of 1. This also shows that if $\gamma, \alpha$ are two paths connecting, say, $w_0$ to $z_0$ in $D$, then $\int_{\gamma}f(w)dw = \int_{\alpha} f(w) dw$. That is, the integrals are path independent.

Why this is useful: suppose $f$ is NOT analytic at, say, $w_0$ but is analytic everywhere else in some open domain $D$ which contains $w_0$. Now let $\gamma, \alpha$ be two disjoint simple closed curves whose bounded regions contain $w_0$. Then $\int_{\gamma} f(w)dw = \int_{\alpha} f(w) dw$ even though the integrals might not be zero.

The curve formed by connecting $\gamma$ to $\alpha$ by a cut line (in green) and going backwards on $\alpha$ is NOT a simple closed curve, but it is a piecewise smooth closed curve which bounds a simply connected region that excludes the point where $f$ is not analytic; hence the integral along this curve IS zero. Subtracting off the integrals along the cut lines (which are traversed in opposite directions and so cancel) yields the equality of the integrals.

3. If $f$ is analytic on a simply connected domain, then $f$ has a primitive there (aka “antiderivative”). That is, there is some $F$ where $F' =f$ on that domain.

Note: you need “simply connected” here as $\frac{1}{z}$ is analytic on $C - \{0\}$ but has no primitive ON $C - \{0\}$.
But $\frac{1}{z}$ does have a primitive on, say, $\{z| Im(z) > 0 \}$ ($Log(z)$ is one such primitive)

4. If $f$ has a primitive on an open domain and $\gamma$ is a closed curve in that domain, then $\int_{\gamma} f(w)dw = 0$.
This follows from our “evaluation of an integral by a primitive” theorem. And note: the domain does NOT have to be simply connected.

Example: $\frac{1}{z^2}$ has a primitive on $C - \{0\}$ so if $\gamma$ is any closed curve that does not run through the origin, $\int_{\gamma} \frac{1}{z^2} dz = 0$. But this does NOT work for $\frac{1}{z}$ as the candidate for a primitive is a branch of the log function, which must have discontinuities on some infinite ray (possibly not straight) whose endpoint is at the origin.
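A numerical illustration of the contrast (my sketch; the ellipse is just an arbitrary closed curve around the origin): $\frac{1}{z^2}$, which has a primitive on $C - \{0\}$, integrates to zero, while $\frac{1}{z}$ picks up $2\pi i$.

```python
import math

n = 20000
h = 2 * math.pi / n
int_recip = 0j       # will hold the integral of 1/z
int_recip_sq = 0j    # will hold the integral of 1/z^2
for j in range(n):
    t = j * h
    z = 3 * math.cos(t) + 2j * math.sin(t)          # ellipse around the origin
    dz = (-3 * math.sin(t) + 2j * math.cos(t)) * h  # z'(t) dt
    int_recip += dz / z
    int_recip_sq += dz / z ** 2
```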