Laurent Series

We’ve established that if f has an isolated singularity at z_0 and that singularity is a pole of order m , then f(z)  = \sum^{m}_{j=1} b_j\frac{1}{(z-z_0)^j} + \sum^{\infty}_{k=0} c_k(z-z_0)^k , where the coefficients are as follows: c_k = \frac{1}{2 \pi i} \int_{\gamma} \frac{f(w)}{(w-z_0)^{k+1}} dw and b_j = \frac{1}{2 \pi i} \int_{\gamma} f(w)(w-z_0)^{j-1} dw , where \gamma is some circle, taken once in the standard direction, that encloses z_0 but no other singular point of f .
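As a quick sanity check of those coefficient formulas, here is a minimal numerical sketch (Python with numpy; the test function e^z/z^2 and the helper function are my own stand-ins, not from the discussion above):

```python
import numpy as np

def circle_integral(f, center, radius, n=4000):
    # \int f(w) dw over the circle |w - center| = radius, once counterclockwise,
    # via a simple Riemann sum (very accurate for smooth periodic integrands)
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    w = center + radius*np.exp(1j*t)
    dw = 1j*radius*np.exp(1j*t)
    return 2*np.pi*np.mean(f(w)*dw)

f = lambda z: np.exp(z)/z**2          # stand-in example: pole of order 2 at z_0 = 0
z0, r = 0.0, 1.0                      # gamma: |w| = 1 encloses 0 and no other singularity

c = lambda k: circle_integral(lambda w: f(w)/(w - z0)**(k + 1), z0, r)/(2j*np.pi)
b = lambda j: circle_integral(lambda w: f(w)*(w - z0)**(j - 1), z0, r)/(2j*np.pi)

# e^z/z^2 = 1/z^2 + 1/z + 1/2 + z/6 + ..., so we expect b_2 = 1, b_1 = 1, c_0 = 1/2, c_1 = 1/6
print(b(2), b(1), c(0), c(1))
```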

So here is what we will prove: if there exist R > r \geq 0 such that f is analytic on the annulus r < |z-z_0| < R , then f has a Laurent series expansion about z_0 valid for all z in this set. Now IF z_0 is a singular point and our set is a punctured disk (r = 0 ), then the singularity is essential if and only if the principal part (the part of the Laurent series with the negative powers of (z-z_0) ) has an infinite number of terms, and we have NOT yet proved that such a series exists in this case. We will remedy this now.

In this diagram: the (possible) singularity is at z_0 (this construction does not require that z_0 be a singular point), the large circle \Gamma lies between z_0 and the nearest other singularity (that is why this works for isolated singularities), z is some point enclosed by \Gamma at which we want to evaluate f , and the small circle \gamma lies between z and z_0 . The green circle about z lies between \Gamma and \gamma ; note that f is analytic inside and just outside of this circle, so Cauchy’s Integral Formula applies to it.

Let G denote the green circle enclosing z , taken once around in the standard direction, and note that f(z) = \frac{1}{2 \pi i} \int_G \frac{f(w)}{w-z} dw . But by the theory of cuts (integrate along \Gamma in the standard direction, along a cut line to G , half way around G in the opposite direction, along a cut line to \gamma , once around \gamma in the opposite direction, back along that cut line, half way around G in the opposite direction, and back along the first cut line to \Gamma ) we see that -\int_G \frac{f(w)}{w-z} dw + \int_{\Gamma}  \frac{f(w)}{w-z} dw - \int_{\gamma} \frac{f(w)}{w-z} dw =0 , as together these paths bound a simply connected region on which the integrand \frac{f(w)}{w-z} is analytic (and the integrals along the cut lines cancel out).

So we get 2 \pi i f(z) = \int_G \frac{f(w)}{w-z} dw = \int_{\Gamma}  \frac{f(w)}{w-z} dw - \int_{\gamma} \frac{f(w)}{w-z} dw . We switch the sign on the second integral:

2 \pi i f(z) = \int_G \frac{f(w)}{w-z} dw = \int_{\Gamma}  \frac{f(w)}{w-z} dw + \int_{\gamma} \frac{f(w)}{z-w} dw
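Before we manipulate these integrals, here is a small numerical check of this identity (a Python/numpy sketch; the function f(w) = \frac{1}{w} , the radii, and the test point z are arbitrary choices, assuming only that f is analytic on the closed annulus between the two circles):

```python
import numpy as np

def circle_integral(f, center, radius, n=4000):
    # \int f(w) dw over |w - center| = radius, once counterclockwise
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    w = center + radius*np.exp(1j*t)
    return 2*np.pi*np.mean(f(w)*1j*radius*np.exp(1j*t))

f = lambda w: 1.0/w                 # stand-in: analytic on the annulus 0.25 <= |w| <= 2
z0, z = 0.0, 1.0 + 0.3j             # z sits between the two circles

big   = circle_integral(lambda w: f(w)/(w - z), z0, 2.0)    # along Gamma: |w| = 2
small = circle_integral(lambda w: f(w)/(z - w), z0, 0.25)   # along gamma: |w| = 0.25

print((big + small)/(2j*np.pi), f(z))    # the two printed values should agree
```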

We now make the following observations: in the first integral (along \Gamma ) we have |w-z_0| > |z-z_0| , and in the second integral (along \gamma ) we have |z-z_0| > |w-z_0| .

Now look at the first integral, in particular the fraction f(w) \frac{1}{w-z} = f(w) \frac{1}{w-z_0 -(z-z_0)} = f(w) \frac{1}{w-z_0} \frac{1}{1-\frac{z-z_0}{w-z_0}}

= f(w) \frac{1}{w-z_0} (1 + \frac{z-z_0}{w-z_0} + (\frac{z-z_0}{w-z_0})^2 + (\frac{z-z_0}{w-z_0})^3...) because we are in the region where the geometric series converges absolutely.

So our integral becomes \int_{\Gamma}  \frac{f(w)}{w-z} dw = \int_{\Gamma} f(w) \sum^{\infty}_{k=0} \frac{(z-z_0)^k}{(w-z_0)^{k+1}}dw . In previous work we’ve shown that we can interchange integration and summation in this case, so we end up with \sum^{\infty}_{k=0} (\int_{\Gamma} \frac{f(w)}{(w-z_0)^{k+1}} dw)(z-z_0)^k , which yields the regular part of the Laurent series.

We now turn to what will become the principal part: the second integral.

\int_{\gamma} \frac{f(w)}{z-w} dw . Let’s focus on the fraction \frac{1}{z-w}

Now we write \frac{1}{z-w} = \frac{1}{z-z_0-(w-z_0)} = \frac{1}{z-z_0}\frac{1}{1-\frac{w-z_0}{z-z_0}} and recall that, on \gamma , |\frac{w-z_0}{z-z_0}| < 1 , so we can expand this into an infinite series (bounded by a convergent geometric series):

\frac{1}{z-z_0}\frac{1}{1-\frac{w-z_0}{z-z_0}}  = \frac{1}{z-z_0} (1 + \frac{w-z_0}{z-z_0} + (\frac{w-z_0}{z-z_0})^2 + (\frac{w-z_0}{z-z_0})^3 + \cdots)

So now, going back to the integral:

\int_{\gamma} \frac{f(w)}{z-w} dw = \int_{\gamma} f(w) \sum^{\infty}_{k=0} \frac{(w-z_0)^k}{(z-z_0)^{k+1}} dw and, once again, because the series is bounded by a convergent geometric series (details similar to those we used in developing the power series) we can interchange summation and integration to obtain

\sum^{\infty}_{k=0} (\int_{\gamma} f(w)(w-z_0)^k dw ) \frac{1}{(z-z_0)^{k+1}} . It is traditional to shift the index to write it as:

\sum^{\infty}_{k=1} (\int_{\gamma} f(w)(w-z_0)^{k-1} dw ) \frac{1}{(z-z_0)^{k}}

So, adding the two series together we have:

2 \pi i f(z) =

\sum^{\infty}_{k=1} (\int_{\gamma} f(w)(w-z_0)^{k-1} dw ) \frac{1}{(z-z_0)^{k}} + \sum^{\infty}_{k=0} (\int_{\Gamma} \frac{f(w)}{(w-z_0)^{k+1}} dw)(z-z_0)^k

Now divide both sides by 2 \pi i to obtain the desired result.
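Here is a small numerical check of the finished formula (a Python sketch; the function \frac{1}{z(1-z)} is my own stand-in, whose Laurent series on 0 < |z| < 1 is \frac{1}{z} + \sum^{\infty}_{k=0} z^k ):

```python
# Stand-in example: f(z) = 1/(z(1-z)) = 1/z + sum_{k>=0} z^k on the annulus 0 < |z| < 1
f = lambda z: 1.0/(z*(1 - z))
z = 0.4 - 0.3j                                    # an arbitrary point in the annulus

principal = 1.0/z                                 # the (finite) principal part
regular   = sum(z**k for k in range(40))          # truncated regular part
print(principal + regular, f(z))                  # should agree to roughly |z|^40
```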

One caveat: to define a Laurent series, one needs to know the center about which one is expanding and the annulus or disk of convergence. The derivation that we used above does not require z_0 to be a singularity or for there to be only one singularity inside of \gamma ; we just require that f be analytic on some open region containing the annulus with boundary circles \Gamma and \gamma .

Actually obtaining the Laurent series

I did NOT show (but it is true) that Laurent series are unique (we did show that for power series), but the general principle still applies: once you obtain, by any means, a Laurent series for a function about a given center (often valid on some punctured disk about a singularity), you have the Laurent series there. If you look back at the proof, what was required is that f be analytic in the region bounded by \Gamma and \gamma (an open annulus whose smaller boundary circle encloses the singularity in question).

So our Laurent series is valid on a punctured disk or on an annulus, with the latter being the case if, say, there were two singularities.

Now as far as actually obtaining a Laurent series: we do that by “hook or crook” and almost never by actually calculating those dreadful integrals.

1) sin(\frac{1}{z}) has an isolated essential singularity at z=0 and its Laurent series can be obtained by substituting \frac{1}{z} for z in the usual Taylor series:

\sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{(2k-1)!z^{2k-1}} = \frac{1}{z} - \frac{1}{3!z^3} + \frac{1}{5!z^5} ...
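A quick numerical sanity check of that series (a Python sketch; the test point is an arbitrary choice):

```python
import cmath, math

z = 0.7 + 0.4j     # an arbitrary nonzero test point; the series converges for all z != 0
approx = sum((-1)**(k + 1)/(math.factorial(2*k - 1)*z**(2*k - 1)) for k in range(1, 12))
print(approx, cmath.sin(1/z))     # the two values should agree closely
```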

2) Now let’s look at something like \frac{1}{1+z^2} . Here there are two isolated poles of order 1: i and -i . So the series we get will depend on where we expand from AND where we want the expansion to be valid.

Two of the “obvious” expansion points would be i and -i , and we’d expect a radius of validity of 2 for each of these (the distance between the two poles). We could also expand about, say, zero and expect a power series expansion with radius 1. Now if we want a series that is centered at 0 but valid for |z| GREATER than 1, we can look at the infinite annulus |z| > 1 , whose inner boundary circle encloses both singularities.

So let us start:

1) About z_0 = 0 : here write \frac{1}{1+z^2}  = \frac{1}{1- (-z^2)} = 1 -z^2 + z^4-z^6 + \cdots =\sum^{\infty}_{k=0} (-1)^kz^{2k} . Because f is analytic on |z| < 1 , this is a power series with no principal part.

2) About z = i . One way to do it is to revert to partial fractions:

\frac{1}{1+z^2} = \frac{1}{(z-i)(z+i)} = \frac{A}{z-i} + \frac{B}{z+i} \rightarrow

Az+Ai + Bz-Bi = 1 \rightarrow A+B=0, (A-B)i = 1 \rightarrow A-B = \frac{1}{i} = -i \rightarrow 2A = -i \rightarrow A =-\frac{i}{2}, B = \frac{i}{2}

So \frac{1}{1+z^2} = \frac{-i}{2} \frac{1}{z-i} + \frac{i}{2} \frac{1}{z+i} . Note that the second term is analytic near z_0 = i , so we can write a power series expansion for it:

\frac{i}{2} \frac{1}{z-i+2i} = \frac{i}{2} \frac{1}{2i} \frac{1}{1+ \frac{z-i}{2i}} = \frac{1}{4} \sum^{\infty}_{k=0} (\frac{1}{-2i})^k (z-i)^k  =\frac{1}{4} \sum^{\infty}_{k=0}(\frac{i}{2})^k (z-i)^k  which is the regular part of the Laurent series.

The total series is

\frac{-i}{2} \frac{1}{z-i} + \frac{1}{4}\sum^{\infty}_{k=0}(\frac{i}{2})^k (z-i)^k and the principal part has only one non-zero term, as expected.

The punctured disk of convergence has radius 2, as expected. I’ll leave it as an exercise to find the Laurent series about z_0 = -i but the result will look very similar.
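A quick numerical check of the series about i (a Python sketch; the test point is an arbitrary choice with 0 < |z-i| < 2 ):

```python
# Check of the series about i; any z with 0 < |z - i| < 2 will do
f = lambda z: 1/(1 + z**2)
z = 0.5 + 1.3j                                               # |z - i| is about 0.58

principal = (-1j/2)/(z - 1j)
regular   = 0.25*sum((1j/2)**k*(z - 1j)**k for k in range(60))
print(principal + regular, f(z))                             # the two values should agree
```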

Now for the Laurent series centered at z_0 = 0 but valid for |z| > 1 . This will have an infinite number of terms with negative exponent but…this series is NOT centered at either singularity.

\frac{1}{1+z^2} = \frac{i}{2}(\frac{1}{i+z} +\frac{1}{i-z}) = \frac{i}{2}\frac{1}{z}(\frac{1}{1+\frac{i}{z}} -\frac{1}{1-\frac{i}{z}}) =\frac{i}{2}\frac{1}{z} \sum^{\infty}_{k=0} ((-1)^k(\frac{i}{z})^k -(\frac{i}{z})^k)

= \frac{i}{2}\frac{1}{z} \sum^{\infty}_{k=1} (-2) (\frac{i}{z})^{2k-1} (the even powers cancel each other)

=  \sum^{\infty}_{k=1} (-1)(i)^{2k}  (\frac{1}{z})^{2k} =\sum^{\infty}_{k=1}(-1)^{k+1}\frac{1}{z^{2k}}

Which equals \frac{1}{z^2} - \frac{1}{z^4} + \frac{1}{z^6} ....

Try graphing \frac{1}{1+x^2} versus \frac{1}{x^2} - \frac{1}{x^4} + \frac{1}{x^6} - \cdots truncated at, say, the 14th power, the 16th power, … the 22nd power, etc. on the range, say, [1, 4] . That gives you an idea of the convergence.
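Here is one way to do that (a matplotlib sketch; I start just above x = 1 because the series diverges at |x| = 1 , and the truncation powers are just the ones suggested above):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(1.05, 4, 400)       # start just above 1: the series diverges at |x| = 1
f = 1/(1 + x**2)

plt.plot(x, f, 'k', label='1/(1+x^2)')
for top in (14, 22):                # truncate the series at 1/x^14 and at 1/x^22
    s = sum((-1)**(k + 1)/x**(2*k) for k in range(1, top//2 + 1))
    plt.plot(x, s, '--', label=f'series up to 1/x^{top}')
plt.legend()
plt.show()
```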

Poles and zeros

We know what a zero of a function is and what a zero of an analytic function looks like: if f is analytic (and not identically zero) near a point z_0 for which f(z_0) = 0 , then f(z) = (z-z_0)^m g(z) where g(z_0) \neq 0 and g is analytic, and this decomposition is unique (via the uniqueness of the power series). m is the order of the zero.

What I did not tell you is that a zero of a NON-CONSTANT analytic function is isolated; that is, there is some r > 0 such that: if A = \{w| |w-z_0| < r \} is the open disk of radius r about z_0 and w \in A with w \neq z_0 , then f(w) \neq 0 . That is, a zero of an analytic function can be isolated from the other zeros.

Here is why: suppose not; then there is some sequence of zeros z_k \rightarrow z_0 with z_k \neq z_0 and f(z_k) = 0 (how to construct: choose a zero z_1 \neq z_0 closer than 1 to z_0 , then let z_2 be a zero of f whose distance to z_0 is less than half the distance between z_1 and z_0 , and keep repeating this process; it cannot terminate, because if it did, the zeros would be isolated from z_0 ).

Note that for each k we have f(z_k) = (z_k-z_0)^m g(z_k) = 0 with z_k \neq z_0 , which implies that g(z_k) = 0 . But g is continuous (being analytic), and since g(z_k) = 0 for all k we get g(z_k) \rightarrow g(z_0) = 0 , which contradicts the fact that g(z_0) \neq 0 . So, the zeros of a non-constant analytic function are isolated.

In fact, we can say even more (an analytic function is a “discrete mapping”; that is, roughly speaking, it takes a discrete set to a discrete set), but we’ll leave that alone, at least for now. See the Discrete Mapping Theorem in Bruce Palka’s complex variables book.

But for now, we’ll just stick with this.

Note: in real analysis, there is the concept of a “bump function”: one which is infinitely smooth (has derivatives of all orders), is identically zero on some interval (or region), and is not identically zero off of that region. This cannot happen with analytic complex functions.

Now onto poles.

A singularity of a complex function is a point at which the complex function is not analytic. A singularity is isolated if one can put an open disk around it on which the function is analytic EXCEPT at the singularity.

More formally: we say f has an isolated singularity at z_0 if f is NOT analytic at z_0 but f is analytic on some set 0 < |z-z_0 | < R for some R > 0 . (These sets are referred to as “punctured disks” or “deleted disks”.)

Example: \frac{1}{z} has an isolated singularity at z = 0 . sec(z) has isolated singularities at \frac{\pi}{2} \pm k \pi, k \in \{0, 1, 2,...\} .

Example: \frac{1}{z^n - 1} has isolated singularities at w_k, k \in \{1, 2, ...n \} where w_k is one of the n-th roots of unity. e^{\frac{1}{z}} has an isolated singularity at z = 0

Example: f(z) = \overline{z} is not analytic anywhere and therefore has no isolated singularities.
Example: f(z) = \frac{1}{sin(\frac{1}{z})} has isolated singularities at z = \frac{1}{\pm k \pi}, k \in \{1, 2, ...\} and has a non-isolated singularity at z = 0 . Log(z) has non-isolated singularities at the origin and along the negative real axis.

We will deal with isolated singularities.

There are three types:

1. Removable or “inessential”. This is the case where f is technically not analytic at z_0 but lim_{z \rightarrow z_0}f(z) exists. Think of functions like \frac{sin(z)}{z}, \frac{e^z-1}{z}, etc. It is easy to see what the limits are; just write out the power series centered at z = 0 and do the algebra.

What we do here is to say: if lim_{z \rightarrow z_0} f(z) = l then let g(z) = f(z) for z \neq z_0 and g(z_0) = l , and note that g is analytic. So, it does no harm to all but ignore the inessential singularities.

2. Essential singularities: these are weird objects. Here is what I will say: f has an essential singularity at z_0 if the singularity is neither removable nor a pole. Of course, I need to tell you what a pole is.

Why these are weird: if f has an essential singularity at z_0 and A is ANY punctured disk centered at z_0 containing no other singularities, and w is ANY complex number (with at most one exception), then there exists z_w \in A where f(z_w) = w . This is startling; it basically means that f maps every punctured disk around an essential singularity onto the entire complex plane, possibly minus one point. This is the Great Picard Theorem (we will prove a small version of this, not the full thing).

Example: f(z) = e^{\frac{1}{z}} has an essential singularity at z = 0 . If this seems like an opaque claim, it will be crystal clear when we study Laurent series, which are basically power series, but possibly with terms with negative powers.
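To see how violently e^{\frac{1}{z}} behaves near 0 : the equation e^{\frac{1}{z}} = w (for any w \neq 0 ) has solutions z = \frac{1}{Log(w) + 2 \pi i k} , and taking k large pushes those solutions as close to 0 as we like, so every punctured disk about 0 already contains points where the value w is attained. A quick numerical illustration (a Python sketch; the target value w is an arbitrary choice):

```python
import cmath

w = 2.0 - 3.0j                       # an arbitrary target value (anything except 0 works)
for k in (1, 10, 1000):              # larger k pushes the solution toward the origin
    z = 1/(cmath.log(w) + 2j*cmath.pi*k)
    print(abs(z), cmath.exp(1/z))    # |z| shrinks toward 0, yet e^(1/z) = w every time
```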

3. Poles (what we will spend our time with). If f is not analytic at z_0 but there is some positive integer m such that lim_{z \rightarrow z_0} (z-z_0)^m f(z) exists, then we say that f has a pole at z_0 ; the order of the pole is the smallest such m .

“Easy examples”: \frac{1}{z} has a pole of order 1 at the origin (called a “simple pole”). \frac{1}{(z^2+1)(z-2i)^2} has simple poles at \pm i and a pole of order 2 (called a “double pole”) at 2i . \frac{sin(z)}{z^3} has a pole of order 2 at the origin (if that seems like a strange claim, write out the power series for sin(z) and then do the division).
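If you’d rather not do that division by hand, a computer algebra system will check claims like these; here is a sympy sketch (my own check, not part of the original examples):

```python
import sympy as sp

z = sp.symbols('z')

# sin(z)/z^3 = 1/z^2 - 1/6 + z^2/120 - ... : the worst term is z^(-2), a pole of order 2
print(sp.series(sp.sin(z)/z**3, z, 0, 3))

# For the double pole at 2i: (z - 2i)^2 * f extends analytically and is nonzero there
f = 1/((z**2 + 1)*(z - 2*sp.I)**2)
g = sp.cancel((z - 2*sp.I)**2*f)     # = 1/(z**2 + 1)
print(g.subs(z, 2*sp.I))             # -1/3, finite and nonzero, so the pole has order 2
```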

Relation to zeros:

Fact: if f is analytic and has a zero of order m at z_0 , then \frac{1}{f} has a pole of order m at z_0 .

Note: because zeros of analytic functions are isolated, the singularity of \frac{1}{f} is isolated. Now:

Write f(z) = (z-z_0)^m g(z) with g(z_0) \neq 0 ; then (z-z_0)^m \frac{1}{f(z)} = \frac{(z-z_0)^m}{(z-z_0)^m g(z)} = \frac{1}{g(z)} is analytic at z_0 (no zero in the denominator any longer). Note that m is the smallest integer that works because any smaller power would leave a zero in the denominator.

Fact: if f has a pole of order m at z_0 then \frac{1}{f} has a zero of order m at z_0 .

Reason: Let g(z) = (z-z_0)^m f(z) be the associated analytic function (with m being the smallest integer that works). Since this is the smallest m that works, we can assume that g(z_0) \neq 0 (look at the power series for g : if the first term had (z-z_0) to a non-zero power, a smaller m would have worked).

Now \frac{1}{f(z)} = (z-z_0)^m \frac{1}{g(z)} , and \frac{1}{g} is analytic and non-zero at z_0 (the denominator is not zero), so \frac{1}{f} has a zero of order m at z = z_0 .

So the poles and zeros of an analytic function are very closely linked; they are basically the duals of each other. The calculus intuition of “check for zeros in the denominator” works very well here.
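Here is a quick sympy illustration of that duality (the function sin(z) - z is my own stand-in; it has a zero of order 3 at the origin, so its reciprocal should have a pole of order 3 there):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.sin(z) - z                     # stand-in: a zero of order 3 at the origin

print(sp.series(f, z, 0, 6))          # -z**3/6 + z**5/120 + O(z**6): zero of order 3
print(sp.series(1/f, z, 0, 1))        # -6/z**3 - 3/(10*z) + ...  : pole of order 3
```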

Onward to Laurent series and residues

Start with poles of order m : if f has a pole of order m at z_0 then we know (z-z_0)^m f(z) = g(z) is analytic on some open disk about z_0 .

So we can write g(z) = a_0 + a_1(z-z_0) +a_2(z-z_0)^2 +....= \sum^{\infty}_{k=0}a_k (z-z_0)^k .

So (z-z_0)^m f(z) = a_0 + a_1(z-z_0) +a_2(z-z_0)^2 +....= \sum^{\infty}_{k=0}a_k (z-z_0)^k .

Now divide both sides by (z-z_0)^m and look at what happens to the series:

f(z) = a_0(z-z_0)^{-m} + a_1(z-z_0)^{-m+1} .....+a_{m-1}(z-z_0)^{-1} + a_{m} + a_{m+1}(z-z_0) + a_{m+2}(z-z_0)^2 + ..

So while f does not have a power series centered at z_0 it does have a series of a sort. Such a series is called a Laurent series. It is traditional to write:

f(z) = \sum^{\infty}_{j=1} b_j (z-z_0)^{-j} + \sum^{\infty}_{k=0} a_k (z-z_0)^k . Of course, for a pole of order m , at most m terms of the first series will have non-zero coefficients. For an essential singularity, an infinite number of the coefficients will be non-zero. We will see that the first series yields a function that is analytic for \{ w | |w-z_0| > r \} for some r \geq 0 , and the second series, a power series, is analytic within some open disk of convergence (as usual).

Terminology: \sum^{\infty}_{j=1} b_j (z-z_0)^{-j} is called the principal part of the Laurent series, and \sum^{\infty}_{k=0} a_k (z-z_0)^k is called the regular part.

b_1 (the coefficient of \frac{1}{z-z_0} ) is called the residue of f at z_0 .

Why this is important: Let \gamma be a small circle, taken once in the standard direction, that encloses z_0 but no other singularities. Then \int_{\gamma} f(z)dz = 2 \pi i b_1 = 2 \pi i Res(f, z_0) . This is the famous Residue Theorem, and it arises from simple term by term integration of the Laurent series. For a pole it is easy: the integral of the regular part is zero since the regular part is an analytic function, so we need only integrate the terms with negative powers, and there are only a finite number of these.

Each b_k \frac{1}{(z-z_0)^k} has a primitive EXCEPT for b_1 \frac{1}{z-z_0} (for k \geq 2 a primitive is \frac{b_k}{1-k}(z-z_0)^{1-k} ), so each of those integrals is zero as well. So ONLY THE b_1 term matters, with regards to integrating around a closed loop!
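A quick numerical illustration of that last point (a Python/numpy sketch; the center and radius are arbitrary choices): integrating (w - z_0)^{-k} once around a circle gives 0 for every k except k = 1 , where it gives 2 \pi i .

```python
import numpy as np

z0, r = 0.7 + 0.2j, 0.5                    # arbitrary center and radius
t = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
w  = z0 + r*np.exp(1j*t)
dw = 1j*r*np.exp(1j*t)

for k in range(1, 5):
    val = 2*np.pi*np.mean(dw/(w - z0)**k)  # \oint (w - z0)^(-k) dw
    print(k, val)                          # about 2*pi*i for k = 1, about 0 otherwise
```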

The proof is not quite as straightforward if the singularity is essential, though the result still holds. For example:

e^{\frac{1}{z}} = 1 + \frac{1}{z} + \frac{1}{2!z^2} + \frac{1}{3!z^3} + \cdots + \frac{1}{k!z^k} + \cdots but the result still holds; we just have to be a bit more careful about justifying term by term integration.
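As a numerical sanity check (a Python/numpy sketch, not a proof): integrating e^{\frac{1}{w}} once around the unit circle does return 2 \pi i , matching b_1 = 1 .

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
w, dw = np.exp(1j*t), 1j*np.exp(1j*t)             # the unit circle about 0

print(2*np.pi*np.mean(np.exp(1/w)*dw), 2j*np.pi)  # both about 2*pi*i, since b_1 = 1
```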

So, if f is at all reasonable (only isolated singularities), then integrating f along a closed curve amounts to finding the residues within the curve (and having the curve avoid the singularities, of course), adding them up and multiplying by 2 \pi i . Note: this ONLY applies to f with isolated singularities; for other functions (say f(z) = \overline{z} ) we have to grit our teeth and parameterize the curve the old fashioned way.

Now FINDING those residues can be easy, or at times, difficult. Stay tuned.