Onward to the argument principle and Rouché’s Theorem

Consider the function f(z) = z^3 and look at what it does to values on the circle z(t) = e^{it}, t \in [0, 2\pi) . f(e^{it}) = e^{i3t} and as we go around the circle once, we see that arg(f(z)) = arg(e^{i3t}) ranges from 0 to 6 \pi . Note also that f has a zero of order 3 at the origin and (2 \pi) \cdot 3 = 6 \pi . That is NOT a coincidence.

Now consider the function g(z) = \frac{1}{z^3} and look at what it does to values on the circle z(t) = e^{it}, t \in [0, 2\pi) . g(e^{it}) = e^{-i3t} and as we go around the circle once, we see that arg(g(z)) = arg(e^{-i3t}) ranges from 0 to -6 \pi . And here, g has a pole of order 3 at the origin. This, too, is not a coincidence.

We can formalize this somewhat: in the first case, suppose we let \gamma be the unit circle taken once around in the standard direction and let’s calculate:

\int_{\gamma} \frac{f'(z)}{f(z)} dz = \int_{\gamma}\frac{3z^2}{z^3}dz = 3 \int_{\gamma} \frac{1}{z}dz = 6 \pi i

In the second case: \int_{\gamma} \frac{g'(z)}{g(z)} dz = \int_{\gamma}\frac{-3z^{-4}}{z^{-3}}dz = -3 \int_{\gamma} \frac{1}{z}dz = -6 \pi i

Here is what is going on: you might have been tempted to think \int_{\gamma} \frac{f'(z)}{f(z)} dz = Log(f(z))|_{\gamma} = (ln|f(z)| +iArg(f(z)) )|_{\gamma} and this almost works; remember that Arg(z) switches values abruptly along a ray from the origin (for the principal branch, the negative real axis), and any version of the argument function must do so along SOME ray from the origin. The real part of the integral (the ln|f(z)| part) cancels out when one goes around the curve. The argument part (the imaginary part) does not; in fact we pick up a value of 2 \pi i every time the image f(\gamma) crosses that given ray, and in the case of f(z) = z^3 the image crosses that ray 3 times.

This is the argument principle in action.
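Both integrals above are easy to check numerically. Here is a quick sketch (assuming numpy is available; the uniform Riemann sum over the circle is an arbitrary but very accurate discretization for periodic integrands):

```python
import numpy as np

# Discretize the unit circle z(t) = e^{it} and approximate the contour
# integrals of f'/f and g'/g by a uniform Riemann sum.
n = 100000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / n)                    # dz = i e^{it} dt

integral_f = np.sum((3 * z**2 / z**3) * dz)      # f(z) = z^3:   f'/f = 3/z
integral_g = np.sum((-3 * z**-4 / z**-3) * dz)   # g(z) = z^-3:  g'/g = -3/z

print(integral_f / (1j * np.pi))  # ≈ 6
print(integral_g / (1j * np.pi))  # ≈ -6
```

The printed ratios are 6 and -6 up to roundoff; that is, the integrals are 6 \pi i and -6 \pi i as computed above.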

Now of course, this principle can work in the vicinity of any isolated singularity or zero or along a curve that avoids singularities and zeros but encloses a finite number of them. The mathematical machinery we develop will help us with this concept.

So, let’s suppose that f has a zero of order m at z = z_0 . This means that f(z) = (z-z_0)^m g(z) where g(z_0) \neq 0 and g is analytic on some open disk about z_0 .

Now calculate: \frac{f'(z)}{f(z)} = \frac{m(z-z_0)^{m-1} g(z) + (z-z_0)^m g'(z)}{(z-z_0)^m g(z)} = \frac{m}{z-z_0} + \frac{g'(z)}{g(z)} . Now note that the second term of the sum is an analytic function; hence the Laurent series for \frac{f'(z)}{f(z)} has \frac{m}{z-z_0} as its principal part; hence Res(\frac{f'(z)}{f(z)}, z_0) = m

Now suppose that f has a pole of order m at z_0 . Then f(z) =\frac{1}{h(z)} where h(z) has a zero of order m . So as before write f(z) = \frac{1}{(z-z_0)^m g(z)} = (z-z_0)^{-m}(g(z))^{-1} where g is analytic and g(z_0) \neq 0 . Now f'(z) = -m(z-z_0)^{-m-1}(g(z))^{-1} -(g(z))^{-2}g'(z)(z-z_0)^{-m} and
\frac{f'(z)}{f(z)} =\frac{-m}{z-z_0} -  \frac{g'(z)}{g(z)} where the second term is an analytic function. So Res(\frac{f'(z)}{f(z)}, z_0) = -m

This leads to the following result: let f be analytic on some open set containing a piecewise smooth simple closed curve \gamma and analytic on the region bounded by the curve as well, except for a finite number of poles. Also suppose that f has no zeros or poles on the curve itself.

Then \int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (\sum^k_{j =1} m_j - \sum^l_{j = 1}n_j ) where m_1, m_2, ..., m_k are the orders of the k zeros of f inside of \gamma and n_1, n_2, ..., n_l are the orders of the poles of f inside \gamma .

This follows directly from the theory of cuts: deform \gamma into small circles, one around each zero and each pole, and apply the residue computations above.

Use of our result: let f(z) = \frac{(z-i)^4(z+2i)^3}{z^2 (z+3i-4)} and let \Gamma be a circle of radius 10 (large enough to enclose all poles and zeros of f ). Then \int_{\Gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4 + 3 -2-1) = 8 \pi i . Now if \gamma is the circle |z| = 3 we see that \gamma encloses the zeros at i, -2i and the pole at 0 , but not the pole at 4-3i , so \int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4+3 -2) = 10 \pi i
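Both counts can be corroborated numerically via the argument principle itself: track the continuous argument of f along the circle and divide the total change by 2 \pi . A sketch (assuming numpy; np.unwrap reconstructs a continuous branch of the argument):

```python
import numpy as np

def zeros_minus_poles(radius, n=400000):
    # Winding number of f around 0: the net change of arg(f(z)) as z goes
    # once around the circle, divided by 2*pi. By the argument principle
    # this equals (zeros) - (poles) inside, counted with multiplicity.
    t = np.linspace(0, 2 * np.pi, n)
    z = radius * np.exp(1j * t)
    f = (z - 1j)**4 * (z + 2j)**3 / (z**2 * (z + 3j - 4))
    arg = np.unwrap(np.angle(f))
    return (arg[-1] - arg[0]) / (2 * np.pi)

print(int(round(zeros_minus_poles(10))))  # 4, so the integral is 8*pi*i
print(int(round(zeros_minus_poles(3))))   # 5, so the integral is 10*pi*i
```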

Now this is NOT the main use of this result; the main use is to describe the argument principle and to get to Rouché’s Theorem which, in turn, can be used to deduce facts about the zeros of an analytic function.

Argument principle: our discussion about integrating \frac{f'(z)}{f(z)} around a closed curve (assuming that the said curve passes through no zeros or poles of f and encloses only a finite number of them) shows that, as we traverse the curve, the argument of the function changes by 2 \pi (\text{ no. of zeros - no. of poles}) where the zeros and poles are counted with multiplicities.

Example: consider the function f(z) = z^8 + z^2 + 1 . Let’s find how many zeros it has in the first quadrant.

If we consider the quarter circle of very large radius R (that stays in the first quadrant and is large enough to enclose all first quadrant zeros) and note f(Re^{it})  = R^8e^{i8t}(1+ \frac{1}{R^6}e^{-i6t} + \frac{1}{R^8} e^{-i8t}) , we see that the argument changes by about 8(\frac{\pi}{2}) = 4 \pi along the arc. The function has no roots along the positive real axis, where f(x) = x^8 + x^2 + 1 \geq 1 . Now setting z = iy to run along the positive imaginary axis we get f(iy) = y^8 -y^2 + 1 , which is positive for large y , has one relative minimum at y = 2^{\frac{-1}{3}} where its value is positive, and equals 1 at y = 0 . So the argument does not change along the two axes, and the total change stays at 4 \pi . So we get 4 \pi = 2 \pi (\text{no. of roots in the first quadrant}) , which means that we have 2 roots in the first quadrant.

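The count is easy to corroborate numerically with a general-purpose root finder; a sketch (assuming numpy, whose np.roots finds the eigenvalues of the companion matrix of the polynomial):

```python
import numpy as np

# Coefficients of z^8 + z^2 + 1, highest power first.
roots = np.roots([1, 0, 0, 0, 0, 0, 1, 0, 1])
first_quadrant = [r for r in roots if r.real > 0 and r.imag > 0]
print(len(first_quadrant))  # 2, matching the argument-principle count
```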

Now for Rouché’s Theorem
Here is Rouché’s Theorem: let f, g be analytic on some piecewise smooth simple closed curve C and on the region that C encloses. Suppose that, on C , we have |f(z) + g(z)| < |f(z)| . Then f, g have the same number of zeros (counted with multiplicity) inside C .

Note: the inequality precludes f from having a zero on C , and we can assume that f, g have no common zeros, for if they do, we can “cancel them out” by, say, writing f(z) = (z-z_0)^m f_1(z), g(z) = (z-z_0)^mg_1(z) at the common zeros.

Dividing the inequality by |f(z)| : on C we have |1 + \frac{g(z)}{f(z)}| < 1 , which means that the values of the new function \frac{g(z)}{f(z)} lie within the disk |w+1| < 1 in the image plane. This means that the argument of \frac{g(z)}{f(z)} always lies strictly between \frac{\pi}{2} and \frac{3 \pi }{2} . So the argument cannot change by 2 \pi as we traverse C , and, by the argument principle, the number of zeros and the number of poles of \frac{g(z)}{f(z)} inside C must be equal (up to multiplicity). But the poles come from the zeros of the denominator f and the zeros come from the zeros of the numerator g .

And note: once again, what happens on the boundary of a region (the region bounded by the closed curve) determines what happens INSIDE the curve.

Now let’s see what we can do with this. Consider our g(z) = z^8 + z^2 + 1 . Now |z^8 -(z^8 + z^2 + 1)| =|z^2+1| < |z^8| on |z| = \frac{3}{2} (and actually on somewhat smaller circles too). This means that z^8 and z^8+z^2 + 1 have the same number of roots inside the circle |z| = \frac{3}{2} : eight roots (counting multiplicity). Now note that |z^8 +z^2 + 1 -1| = |z^8+z^2| < 1 for |z| \leq \frac{2}{3} , so z^8 +z^2 + 1 and 1 have the same number of zeros inside the circle |z| = \frac{2}{3} : none. This means that all of the roots of z^8+z^2 + 1 lie in the annulus \frac{2}{3} < |z| < \frac{3}{2} .
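Again, this is easy to corroborate numerically (a sketch, assuming numpy): every computed root of z^8 + z^2 + 1 should have modulus strictly between \frac{2}{3} and \frac{3}{2} .

```python
import numpy as np

roots = np.roots([1, 0, 0, 0, 0, 0, 1, 0, 1])      # z^8 + z^2 + 1
moduli = np.abs(roots)
print(moduli.min() > 2 / 3, moduli.max() < 3 / 2)  # True True
```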


Residue integral calculation, Cauchy Principal Value, etc.

First: the Cauchy Principal Value. Remember when you were in calculus and did \int^{\infty}_{-\infty} \frac{dx}{1+x^2} ? If you said something like lim_{b \rightarrow \infty} \int^{b}_{-b} \frac{1}{1+x^2} dx you lost some credit; you were supposed to say that the integral converged if and only if lim_{a \rightarrow \infty} \int^{a}_{0} \frac{1}{1+x^2} dx AND lim_{b \rightarrow \infty} \int^{0}_{-b} \frac{1}{1+x^2} dx BOTH converged. The limits had to be independent of one another.

That is correct, of course, but IF you knew that both integrals converged then, in fact, lim_{b \rightarrow \infty} \int^{b}_{-b} \frac{1}{1+x^2} dx gave the correct answer.

As for why you need convergence: lim_{b \rightarrow \infty} \int^{b}_{-b} x dx = lim_{b \rightarrow \infty} \frac{x^2}{2}|^b_{-b} = 0 , so while the integral diverges, this particular symmetric limit is zero. This limit has a name: it is called the Cauchy Principal Value of the integral, and it is equal to the value of the improper integral PROVIDED the integral converges.
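A quick numerical illustration of the difference (a sketch, assuming numpy; trapezoid sums stand in for the integrals):

```python
import numpy as np

b = 1000.0
x = np.linspace(-b, b, 200001)
h = x[1] - x[0]

# Symmetric limit for f(x) = x: the Cauchy Principal Value candidate.
symmetric = np.sum((x[1:] + x[:-1]) * h) / 2        # integral over [-b, b]

# One-sided piece, which must converge for the improper integral to exist;
# for f(x) = x it grows like b^2/2 instead.
pos = x[x >= 0]
one_sided = np.sum((pos[1:] + pos[:-1]) * h) / 2    # integral over [0, b]

print(abs(symmetric) < 1e-6, one_sided)             # True, roughly b**2/2
```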

We will use this concept in some of our calculations.

One type of integral: Let f(z) be a function with at most a finite number of isolated singularities, none of which lie on the real axis. Suppose for R large enough, |f(Re^{it})| \leq \frac{M}{R^p} for some constant M and some p > 1 . Let H denote the upper half plane (Im(z) > 0 ). Then \int^{\infty}_{-\infty} f(x) dx = 2 \pi i \sum \{ \text{residues in } H\}

The idea: let C represent the contour shown above: upper half of the circle |z| = R followed by the diameter, taken once around in the standard direction. Of course \int_C f(z) dz = 2 \pi i \sum \{ \text{residues enclosed by } C \}  and if R is large enough, the contour encloses all of the singularities in the upper half plane.

But the integral is also equal to the integral along the real axis followed by the integral along the upper half circle. But as far as the integral along the upper half circle (call it \Gamma )

|\int_{\Gamma} f(z) dz | \leq \frac{M}{R^p} \pi R = \frac{M \pi}{R^{p-1}} \rightarrow 0 as R \rightarrow \infty because p-1 > 0 . The bound comes from the “maximum of the function being integrated times the arc length of the curve”.

So the only non-zero part left is lim_{R \rightarrow \infty} \int^{R}_{-R} f(x) dx , which is the Cauchy Principal Value; IF the integral is convergent, this is equal to 2 \pi i \sum \{ \text{residues in } H\} .

Application: \int^{\infty}_{-\infty} \frac{x^2}{x^4+1} dx . Note that if f(z) = \frac{z^2}{z^4+1} then |f(Re^{it})| \leq \frac{R^2}{R^4-1} , which meets our criteria. The upper half plane singularities are at z_0 = e^{i\frac{\pi}{4}}, z_1 = e^{i\frac{3\pi}{4}} ; both are simple poles. So we can plug these values into \frac{z^2}{4z^3} = \frac{1}{4}z^{-1} (the \frac{h}{g'} formula for simple poles).

So to finish the integral up, the value is 2 \pi i \frac{1}{4}((e^{i\frac{\pi}{4}})^{-1} +(e^{i\frac{3\pi}{4}})^{-1} ) = \frac{1}{2} \pi i (-2i\frac{1}{\sqrt{2}}) = \frac{\pi}{\sqrt{2}}
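A numerical sanity check on the value (a sketch, assuming numpy; truncating the integral at |x| = L throws away a tail of roughly \frac{2}{L} ):

```python
import numpy as np

L = 2000.0
x = np.linspace(-L, L, 2_000_001)
y = x**2 / (x**4 + 1)
integral = np.sum((y[1:] + y[:-1]) * (x[1] - x[0])) / 2   # trapezoid rule
print(integral, np.pi / np.sqrt(2))   # equal to within about 1/1000
```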

Now for a more delicate example
This example will feature many concepts, including the Cauchy Principal Value.

We would like to calculate \int^{\infty}_0 \frac{sin(x)}{x} dx with the understanding that lim_{x \rightarrow 0} \frac{sin(x)}{x} = 1 , so we take \frac{sin(x)}{x} to be 1 at x = 0 .

First, we should show that \int^{\infty}_0 \frac{sin(x)}{x} dx converges. We can do this via the alternating series test:

\int^{\infty}_0 \frac{sin(x)}{x} dx = \sum^{\infty}_{k=0} \int_{\pi k}^{\pi (k+1)} \frac{sin(x)}{x} dx which forms an alternating series of terms whose magnitudes decrease to 0 (we are adding the signed areas bounded by the graph of \frac{sin(x)}{x} and the x axis; the signed areas alternate between positive and negative, and their absolute values decrease to zero since |\frac{sin(x)}{x}| \leq \frac{1}{x} for x > 0 ).

Above: we have the graphs of Si(x) = \int^x_0 \frac{sin(t)}{t} dt \text{ and } \frac{sin(x)}{x} .

Aside: note that \int^{\infty}_0 |\frac{sin(x)}{x}| dx diverges.
Reason: \int^{(k+1) \pi}_{k \pi} |\frac{sin(x)}{x}| dx > \frac{1}{(k+1) \pi} \int^{(k+1) \pi}_{k \pi}|sin(x)| dx = \frac{2}{(k+1) \pi} and these form the terms of a divergent series.

So we know that our integral converges. So what do we do? Our basic examples assume no poles on the axis. And we do have the trig function.

So here is what turns out to work: use f(z) = \frac{e^{iz}}{z} . Why? For one, along the real axis (where y = 0 ) we have \frac{e^{iz}}{z} = \frac{e^{ix}}{x} =\frac{cos(x)}{x} + i \frac{sin(x)}{x} , and so what we want will be the imaginary part of our expression, once we figure out how to handle zero AND the real part. Now what about our contour?

Via (someone else’s) cleverness, we use:

We have a small half circle around the origin (upper half plane), the line segment to the larger half circle, the larger half circle in the upper half plane, the segment from the larger half circle to the smaller one. If we call the complete contour \gamma then by Cauchy’s theorem \int_{\gamma} \frac{e^{iz}}{z} dz = 0 no matter what r \text{ and } R are provided 0 < r < R . So the idea will be to let r \rightarrow 0, R \rightarrow \infty and to see what happens.

Let L stand for the larger half circle and we hope that as R \rightarrow \infty we have \int_L \frac{e^{iz}}{z} dz \rightarrow 0

This estimate is a bit trickier than the other estimates.

Reason: if we try |\frac{e^{iRe^{it}}}{Re^{it}}| = |\frac{e^{iRcos(t)}e^{-Rsin(t)}}{Re^{it}}| = \frac{e^{-Rsin(t)}}{R} , which equals \frac{1}{R} when t = 0, t = \pi . Then multiplying this by the length of the curve (\pi R ) we simply get \pi , which does NOT go to zero.

So we turn to integration by parts: in \int_L \frac{e^{iz}}{z} dz use u = \frac{1}{z}, dv = e^{iz}dz, du = -\frac{1}{z^2}dz, v = -ie^{iz} \rightarrow \int_L \frac{e^{iz}}{z} dz= -ie^{iz}\frac{1}{z}|_L - i \int_L \frac{e^{iz}}{z^2} dz

Now as R \rightarrow \infty we have |-ie^{iz}\frac{1}{z}|_L | \leq \frac{2}{R} (the boundary term is evaluated at the endpoints z = \pm R , and each contributes at most \frac{1}{R} ), which does go to zero.

And the integral: we now have a z^2 in the denominator instead of merely z hence

|\int_L \frac{e^{iz}}{z^2} dz | \leq \frac{1}{R^2} \pi R = \frac{\pi} {R} which also goes to zero.

So we have what we want on the upper semicircle.

The smaller semicircle

We will NOT get zero as we integrate along the small semicircle and let its radius shrink to zero. Let’s denote this semicircle by l (and note that we traverse it in the opposite of the standard direction), so we are really calculating -\int_l \frac{e^{iz}}{z} dz . This is not an elementary integral. But what we can do is expand \frac{e^{iz}}{z} into its Laurent series centered at zero and obtain:
\frac{e^{iz}}{z} = \frac{1}{z} (1 + (iz) + (iz)^2 \frac{1}{2!} + (iz)^3\frac{1}{3!} + ...) = (\frac{1}{z} + i - z \frac{1}{2!} - i z^2 \frac{1}{3!} + ...) = \frac{1}{z} + g(z) where g(z) represents the regular part, i.e. the analytic part.

So now -\int_l \frac{e^{iz}}{z} dz = -\int_l \frac{1}{z} dz  -\int_l g(z) dz . Now the second integral goes to zero as r \rightarrow 0 because it is the integral of an analytic function over a smooth curve whose length is shrinking to zero (or alternately: whose endpoints are getting closer together; we’d have G(r) - G(-r) where G is a primitive of g and both endpoints \pm r \rightarrow 0 ).

The part of the integral that survives is -\int_l \frac{1}{z} dz = -\pi i for ANY r (check this: use z(t) = re^{it}, t \in [0, \pi] ).

So adding up the integrals (I am suppressing the integrand for brevity)

\int_L + \int^{-r}_{-R} -\int_l + \int^{R}_r = 0 \rightarrow \int^{-r}_{-R} + \int^{R}_r = \int_l - \int_L \rightarrow \pi i as r \rightarrow 0, R \rightarrow \infty .

So we have lim_{R \rightarrow \infty, r \rightarrow 0 } ( \int^{-r}_{-R} (\frac{cos(x)}{x} + i \frac{sin(x)}{x}) dx +  \int^{R}_{r} (\frac{cos(x)}{x} + i \frac{sin(x)}{x}) dx ) =\pi i

Now for the cosine integral: yes, as r \rightarrow 0 this integral diverges. BUT note that for all finite R, r > 0 we have \int^{-r}_{-R} \frac{cos(x)}{x} dx  +  \int^{R}_{r} \frac{cos(x)}{x}  dx = 0 , as \frac{cos(x)}{x} is an odd function. So the Cauchy Principal Value of \int^{\infty}_{-\infty} \frac{cos(x)}{x} dx is 0 .

We can now use the imaginary parts and note that the integrals converge (as previously noted) and so:

\int^{\infty}_{-\infty} i\frac{sin(x)}{x} dx = i \pi \rightarrow \int^{\infty}_0 \frac{sin(x)}{x}dx = \frac{\pi}{2}
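The value \frac{\pi}{2} can be corroborated numerically: the tail of \int \frac{sin(x)}{x} dx beyond a cutoff is bounded by 2 divided by the cutoff, by the same alternating-series reasoning as above. A sketch (assuming numpy):

```python
import numpy as np

upper = 2000 * np.pi                # cutoff; the omitted tail is at most 2/upper
x = np.linspace(0.0, upper, 2_000_001)
y = np.sinc(x / np.pi)              # numpy's sinc(t) is sin(pi t)/(pi t)
integral = np.sum((y[1:] + y[:-1]) * (x[1] - x[0])) / 2   # trapezoid rule
print(integral, np.pi / 2)          # equal to within a few times 1e-4
```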

Residues: integrals, calculation, etc.

Here is the idea: we’d like (relatively) easy ways to integrate functions around simple closed curves. For the purposes of this particular note, we assume the usual about the closed curve (piecewise smooth, once around in the standard direction) and that the function in question does not have singularities along the said closed curve and only a finite number of isolated singularities in the simply connected region enclosed by the curve. If we have no singularities in the said region, then our integral is zero and there is nothing more to do.

Now by the theory of cuts, we can assume that the integral around the given simple closed curve is equal to the sum of the integrals of the function around small circles surrounding each singular point.

So, integrals of this type just boil down to calculating the integrals of the function around these isolated singularities and then just adding.

So let’s examine a typical singularity: say z_0 . Let R denote the circle of radius R centered at z_0 , and assume that R is small enough that we don’t enclose any other singularity.

Now on a punctured disk (excluding z_0 ) bounded by this circle, we have

f(z) = \sum^{\infty}_{k=1} b_k \frac{1}{(z-z_0)^k} + \sum^{\infty}_{k=0} c_k(z-z_0)^k . Now we have

\int_R f(z)dz = \int_R ( \sum^{\infty}_{k=1} b_k \frac{1}{(z-z_0)^k} + \sum^{\infty}_{k=0} c_k(z-z_0)^k ) dz

= \int_R \sum^{\infty}_{k=1} b_k \frac{1}{(z-z_0)^k} dz + \int_R \sum^{\infty}_{k=0} c_k(z-z_0)^k dz

Now the second integral is the integral of a power series (the regular part of the Laurent series) and therefore the integral is zero since we are integrating along a simple closed curve. So only the integral of the principal part matters.

Now we can interchange integration and summation by the work we did to develop the Laurent series to begin with.

Doing that we obtain:

\int_R f(z)dz =  \sum^{\infty}_{k=1} \int_R b_k \frac{1}{(z-z_0)^k} dz and keep in mind that the b_k are constants. So we have:

\int_R f(z)dz =  \sum^{\infty}_{k=1} b_k \int_R  \frac{1}{(z-z_0)^k} dz and so we need only evaluate

\int_R \frac{1}{(z-z_0)^k} dz to evaluate our integral.

Now for k \in \{2, 3, 4, ... \} we note that \frac{1}{(z-z_0)^k} has a primitive in the punctured disk. OR, if you don’t like that, you can do the substitution z(t) = Re^{it} +z_0, dz = Rie^{it}dt, t \in [0, 2 \pi] and find that these integrals all evaluate to zero.

So only the integral \int_R \frac{1}{z-z_0} dz matters and the aforementioned substitution shows that this integral evaluates to 2 \pi i

So all of this work shows that \int_R f(z) dz = 2 \pi i b_1 ; only the coefficient of the \frac{1}{z-z_0} term matters.
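The circle integrals that drive this computation are easy to check numerically; a sketch (assuming numpy; the center z_0 = 1 + 2i and the radius \frac{1}{2} are arbitrary choices):

```python
import numpy as np

z0, r, n = 1.0 + 2.0j, 0.5, 100000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = z0 + r * np.exp(1j * t)
dz = 1j * (z - z0) * (2 * np.pi / n)      # dz = i r e^{it} dt

# Integrals of (z - z0)^(-k) around the circle, for k = 1, 2, 3, 4.
vals = {k: np.sum((z - z0)**(-k) * dz) for k in range(1, 5)}
print(vals[1])                    # ≈ 2*pi*i
print(vals[2], vals[3], vals[4])  # all ≈ 0
```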

b_1 is called the residue of f at z_0 and sometimes denoted Res(f,z_0)

So now we see that the integral around our original closed curve is just 2 \pi i \sum_k Res(f, z_k) , that is, 2 \pi i times the sum of the residues of f which are enclosed by our closed curve.

This means that it would be good to have “easy” ways to calculate residues of a function.

Now of course, we could always just calculate the Laurent series of the function evaluated at each singularity in question. SOMETIMES, this is easy. And if the singularity in question is an essential one, it is what we have to do.

But sometimes the calculation of the series is tedious and it would be great to have another method.

The idea
Ok, suppose we have a simple pole at z_0 (we’ll build upon this idea). This means that f(z) = b_1\frac{1}{z-z_0} + \sum^{\infty}_{k=0} c_k (z-z_0)^k Now note that (z-z_0)f(z) = b_1 +  \sum^{\infty}_{k=0} c_k (z-z_0)^{k+1} so we obtain:

b_1 = lim_{z \rightarrow z_0} (z-z_0)f(z)

Example: calculate the residue of f(z) = \frac{1}{sin(z)} at z = 0 .

lim_{z \rightarrow 0}z\frac{1}{sin(z)} =lim_{z \rightarrow 0}\frac{z}{sin(z)} =1 .

Now sometimes this is straightforward, and sometimes it isn’t.

Now if f(z) = \frac{1}{g(z)} where g(z) has an isolated zero at z = z_0 (as is the case when we have a pole), we can then write (z-z_0) f(z) = \frac{z-z_0}{g(z)} where both the numerator and denominator functions are going to zero.

This should remind you of L’Hopital’s rule from calculus. So we should derive a version for our use.

Now if both h, g are analytic with isolated zeros at z_0 (so h(z_0) = g(z_0) = 0 ), then, for a small increment w ,

\frac{h(z_0+w)}{g(z_0+w)} =  \frac{h(z_0+w) - h(z_0)}{g(z_0+w) - g(z_0)} (because h(z_0)=g(z_0)=0 )

= \frac{\frac{h(z_0+w) - h(z_0)}{w}}{\frac{g(z_0+w) - g(z_0)}{w}} . Now take a limit as w \rightarrow 0 to obtain \frac{h'(z_0)}{g'(z_0)} , which can be easier to calculate (provided g'(z_0) \neq 0 , i.e. the zero of the denominator is simple).

If we apply this technique to \frac{1}{sin(z)} we get \frac{(z)'}{(sin(z))'} = \frac{1}{cos(z)} which is 1 at z = 0 .
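We can check Res(\frac{1}{sin(z)}, 0) = 1 against the residue theorem directly, by numerically integrating around a small circle. A sketch (assuming numpy; the radius \frac{1}{2} keeps the other poles at \pm \pi, \pm 2\pi, ... outside):

```python
import numpy as np

n = 100000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = 0.5 * np.exp(1j * t)              # circle of radius 1/2 about 0
dz = 1j * z * (2 * np.pi / n)
residue = np.sum(dz / np.sin(z)) / (2j * np.pi)
print(residue)                        # ≈ 1
```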

Ok, my flight is boarding so I’ll write part II later.

Poles and zeros

We know what a zero of a function is and what a zero of an analytic function is: if f is analytic at a point z_0 for which f(z_0) = 0 , then f(z) = (z-z_0)^m g(z) where g(z_0) \neq 0 and g is analytic, and this decomposition is unique (via the uniqueness of a power series). m is the order of the zero.

What I did not tell you is that a zero of a NON-CONSTANT analytic function is isolated; that is, there is some r > 0 such that: if A = \{w| |w-z_0| < r \} is the open disk of radius r about z_0 and w \in A with w \neq z_0 , then f(w) \neq 0 . That is, a zero of an analytic function can be isolated from the other zeros.

Here is why: suppose not; then there is some sequence of zeros z_k \rightarrow z_0 with f(z_k) = 0 (how to construct it: choose a zero z_1 closer than 1 to z_0 , then let z_2 be a zero of f whose distance to z_0 is less than half the distance from z_1 to z_0 , and keep repeating this process; it cannot terminate, because if it did, the zeros would be isolated from z_0 ).

Note that for each k (with z_k \neq z_0 ) we have f(z_k) = (z_k-z_0)^m g(z_k) = 0 , which implies that g(z_k) = 0 . But g is continuous, and since g(z_k) = 0 for all k we get g(z_k) \rightarrow g(z_0) = 0 , which contradicts the fact that g(z_0) \neq 0 . So, the zeros of an analytic function are isolated.

In fact, we can say even more (that an analytic function is a “discrete mapping”..that is, roughly speaking, takes a discrete set to a discrete set) but we’ll leave that alone, at least for now. See: Discrete Mapping Theorem in Bruce Palka’s complex variables book

But for now, we’ll just stick with this.

Note: in real analysis, there are infinitely smooth functions (ones with derivatives of all orders) which are zero on some interval (or region) yet not zero off of that region; “bump functions” are built this way. This cannot happen with non-constant analytic complex functions, since their zeros are isolated.

Now onto poles.

A singularity of a complex function is a point at which the complex function is not analytic. A singularity is isolated if one can put an open disk around it on which the function is analytic EXCEPT at the singularity.

More formally: we say f has an isolated singularity at z_0 if f is NOT analytic at z_0 but f is analytic on some set 0 < |z-z_0 | < R for some R > 0 . (These sets are referred to as “punctured disks” or “deleted disks”.)

Example: \frac{1}{z} has an isolated singularity at z = 0 . sec(z) has isolated singularities at \frac{\pi}{2} + k \pi, k \in \{0, \pm 1, \pm 2,...\}

Example: \frac{1}{z^n - 1} has isolated singularities at w_k, k \in \{1, 2, ...n \} where w_k is one of the n-th roots of unity. e^{\frac{1}{z}} has an isolated singularity at z = 0

Example: f(z) = \overline{z} is not analytic anywhere and therefore has no isolated singularities.
Example: f(z) = \frac{1}{sin(\frac{1}{z})} has isolated singularities at z = \frac{1}{\pm k \pi}, k \in \{1, 2, ..\} and has a non-isolated singularity at z = 0 . Log(z) has non-isolated singularities at the origin and along the negative real axis.

We will deal with isolated singularities.

There are three types:

1. Removable or “inessential”. This is the case where f is technically not analytic at z_0 but lim_{z \rightarrow z_0}f(z) exists. Think of functions like \frac{sin(z)}{z}, \frac{e^z-1}{z}, etc. It is easy to see what the limits are; just write out the power series centered at z = 0 and do the algebra.

What we do here is to say: if lim_{z \rightarrow z_0} f(z) = l then let g(z) = f(z), z \neq z_0, g(z_0) = l and note that g is analytic. So, it does no harm to all but ignore the inessential singularities.

2. Essential singularities: these are weird objects. Here is what I will say: f has an essential singularity at z_0 if z_0 is neither removable nor a pole. Of course, I need to tell you what a pole is.

Why these are weird: if f has an essential singularity at z_0 and A is ANY punctured disk centered at z_0 containing no other singularities, and w is ANY complex number (with at most one exception), then there exists z_w \in A where f(z_w) = w . This is startling; it basically means that f maps every punctured disk around an essential singularity onto the entire complex plane, possibly minus one point. This is the Great Picard Theorem. (We will prove a small version of this, not the full thing.)

Example: f(z) = e^{\frac{1}{z}} has an essential singularity at z = 0 . If this seems like an opaque claim, it will be crystal clear when we study Laurent series, which are basically power series, but possibly with terms with negative powers.

3. Poles (what we will spend our time with). If f is not analytic at z_0 but there is some positive integer m such that lim_{z \rightarrow z_0} (z-z_0)^m f(z) exists and is not zero, then we say that f has a pole of order m at z_0 .

“Easy examples”: \frac{1}{z} has a pole of order 1 at the origin (called a “simple pole”). \frac{1}{(z^2+1)(z-2i)^2} has simple poles at \pm i and a pole of order 2 (called a “double pole”) at 2i . \frac{sin(z)}{z^3} has a pole of order 2 at the origin. (If that seems like a strange claim, write out the power series for sin(z) and then do the division.)
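For the last claim: the division gives \frac{sin(z)}{z^3} = \frac{1}{z^2} - \frac{1}{3!} + \frac{z^2}{5!} - ... , so z^2 \frac{sin(z)}{z^3} = \frac{sin(z)}{z} \rightarrow 1 while z \frac{sin(z)}{z^3} blows up: an order 2 pole. A quick numerical look (a sketch, assuming numpy):

```python
import numpy as np

f = lambda z: np.sin(z) / z**3
for eps in [0.1, 0.01, 0.001]:
    # z^2 * f(z) tends to a finite nonzero limit (namely 1), so the pole
    # has order 2; z * f(z) blows up like 1/z.
    print(eps, eps**2 * f(eps), eps * f(eps))
```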

Relation to zeros:

Fact: if f is analytic and has a zero of order m at z_0 , then \frac{1}{f} has a pole of order m at z_0 .

Note: because zeros of analytic functions are isolated, the singularity of \frac{1}{f} is isolated. Now:

Writing f(z) = (z-z_0)^m g(z) with g(z_0) \neq 0 , we see (z-z_0)^m \frac{1}{f(z)} = \frac{(z-z_0)^m}{(z-z_0)^m g(z)} = \frac{1}{g(z)} is analytic at z_0 (no zero in the denominator any longer). Note that m is the smallest integer that works, because any smaller integer would leave a zero in the denominator.

Fact: if f has a pole of order m at z_0 then \frac{1}{f} has a zero of order m at z_0 .

Reason: let g(z) = (z-z_0)^m f(z) be the associated analytic function (with m being the smallest integer that works). Since this is the smallest m that works, we can assume that g(z_0) \neq 0 (look at the power series for g : if the first term had (z-z_0) to a positive power, a smaller m would work).

Now \frac{1}{f(z)} = (z-z_0)^m \frac{1}{g(z)} has a zero of order m at z = z_0 (the denominator is not zero there).

So the poles and zeros of an analytic function are very closely linked; they are basically the duals of each other. The calculus intuition of “check for zeros in the denominator” works very well here.

Onward to Laurent series and residues

Start with poles of order m : if f has a pole of order m at z_0 then we know (z-z_0)^m f(z) = g(z) is analytic on some open disk of convergence about z_0 .

So we can write g(z) = a_0 + a_1(z-z_0) +a_2(z-z_0)^2 +....= \sum^{\infty}_{k=0}a_k (z-z_0)^k .

So (z-z_0)^m f(z) = a_0 + a_1(z-z_0) +a_2(z-z_0)^2 +....= \sum^{\infty}_{k=0}a_k (z-z_0)^k .

Now divide both sides by (z-z_0)^m and look at what happens to the series:

f(z) = a_0(z-z_0)^{-m} + a_1(z-z_0)^{-m+1} .....+a_{m-1}(z-z_0)^{-1} + a_{m} + a_{m+1}(z-z_0) + a_{m+2}(z-z_0)^2 + ..

So while f does not have a power series centered at z_0 it does have a series of a sort. Such a series is called a Laurent series. It is traditional to write:

f(z) = \sum^{\infty}_{j=1} b_j (z-z_0)^{-j} + \sum^{\infty}_{k=0} a_k (z-z_0)^k . Of course, for a pole of order m , at most m terms of the first series will have non-zero coefficients. For an essential singularity, an infinite number of the coefficients will be non-zero. We will see that the first series yields a function that is analytic for \{ w | |w-z_0| > r \} for some r >0 and the second series, a power series, is analytic within some open disk of convergence (as usual).

Terminology: \sum^{\infty}_{j=1} b_j (z-z_0)^{-j} is called the principal part of the Laurent series, and \sum^{\infty}_{k=0} a_k (z-z_0)^k is called the regular part.

b_1 (the coefficient of the \frac{1}{z-z_0} term) is called the residue of f at z_0 .

Why this is important: Let \gamma be a small circle that encloses z_0 but no other singularities. Then \int_{\gamma} f(z)dz = 2 \pi i b_1 = 2 \pi i Res(f, z_0) . This is the famous Residue Theorem and this arises from simple term by term integration of the Laurent series. For a pole it is easy: the integral of the regular part is zero since the regular part is an analytic function, so we need only integrate around the terms with negative powers and there are only a finite number of these.

Each b_k \frac{1}{(z-z_0)^k} has a primitive EXCEPT for b_1 \frac{1}{z-z_0} , so each of those integrals is zero as well. So ONLY THE b_1 term matters with regard to integrating around a closed loop!

The proof is not quite as straight forward if the singularity is essential, though the result still holds. For example:

e^{\frac{1}{z}} = 1 + \frac{1}{z} + \frac{1}{2!z^2} + \frac{1}{3!z^3} + ...+ \frac{1}{k!z^k} +... but the result still holds; we just have to be a bit more careful about justifying term by term integration.
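Numerically, the result can be seen to hold here too: with b_1 = 1 read off from the series, the integral of e^{\frac{1}{z}} around the unit circle should be 2 \pi i . A sketch (assuming numpy):

```python
import numpy as np

n = 200000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / n)
integral = np.sum(np.exp(1.0 / z) * dz)
print(integral / (2j * np.pi))   # ≈ 1, which is b_1
```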

So, if f is at all reasonable (only isolated singularities), then integrating f along a closed curve amounts to finding the residues within the curve (and having the curve avoid the singularities, of course), adding them up and multiplying by 2 \pi i . Note: this ONLY applies to f with isolated singularities; for other functions (say f(z) = \overline{z} ) we have to grit our teeth and parameterize the curve the old fashioned way.

Now FINDING those residues can be easy, or at times, difficult. Stay tuned.

Fresnel Integrals

In this case, we want to calculate \int^{\infty}_0 cos(x^2) dx and \int^{\infty}_0 sin(x^2) dx . Like the previous example, we will integrate a specific function along a made up curve. Also, we will want one of the integrals to be zero. But unlike the previous example: we will be integrating an analytic function and so the integral along the simple closed curve will be zero. We want one leg of the integral to be zero though.

The function we integrate is f(z) = e^{iz^2} . Now along the real axis, y = 0 and so e^{i(x+iy)^2} =e^{ix^2} = cos(x^2) + isin(x^2) and dz = dx So the integral along the positive real axis will be the integrals we want with \int^{\infty}_0 cos(x^2) dx being the real part and \int^{\infty}_0 sin(x^2) dx being the imaginary part.

So here is the contour

Now look at the top wedge: z = te^{i \frac{\pi}{4}}, t \in [0,R] (taken in the “negative direction”)

So z^2 = t^2 e^{i \frac{\pi}{2}} = t^2(cos(\frac{\pi}{2}) + isin(\frac{\pi}{2})) = it^2 \rightarrow e^{iz^2} = e^{i(it^2)} = e^{-t^2}

We still need dz = (cos(\frac{\pi}{4}) + isin(\frac{\pi}{4})) dt = \frac{\sqrt{2}}{2}(1+i)dt

So the integral along this line becomes - \frac{\sqrt{2}}{2}(1+i)\int^{R}_0 e^{-t^2} dt \rightarrow -\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2}(1+i) as R \rightarrow \infty . Now IF we can show that, as R \rightarrow \infty , the integral along the circular arc goes to zero, we have:

\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2}(1+i) = \int^{\infty}_0 cos(x^2) dx + i  \int^{\infty}_0 sin(x^2) dx . Now equate real and imaginary parts to obtain:

\int^{\infty}_0 cos(x^2) dx = \frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2} = \frac{\sqrt{2 \pi}}{4} and \int^{\infty}_0 sin(x^2) dx =\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2} = \frac{\sqrt{2 \pi}}{4}

So let’s set out to do just that: here z = Re^{it}, t \in [0, \frac{\pi}{4}] , so e^{iz^2} = e^{iR^2e^{2it}} = e^{iR^2(cos(2t) + isin(2t))} = e^{iR^2cos(2t)}e^{-R^2sin(2t)} . We now have dz = iRe^{it} dt , so now

|\int^{\frac{\pi}{4}}_0 e^{iR^2cos(2t)}e^{-R^2sin(2t)}iRe^{it} dt| \leq \int^{\frac{\pi}{4}}_0|e^{iR^2cos(2t)} || e^{-R^2sin(2t)} ||iRe^{it}| dt = \int^{\frac{\pi}{4}}_0 e^{-R^2sin(2t)} R dt

Now note: for t \in [0, \frac{\pi}{4}] we have sin(2t) \geq \frac{2}{\pi}t

\rightarrow e^{-R^2 sin(2t)} \leq e^{-R^2\frac{2}{\pi}t} hence

\int^{\frac{\pi}{4}}_0 e^{-R^2sin(2t)} R dt \leq \int^{\frac{\pi}{4}}_0 e^{-R^2\frac{2}{\pi}t} R dt = \frac{\pi}{2R}(1-e^{-R^2\frac{1}{2}}) and this goes to zero as R goes to infinity.
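As a sanity check on the whole argument, one can numerically integrate e^{iz^2} along the three legs of the wedge for a moderate R and confirm that the closed-contour integral vanishes, as Cauchy’s theorem demands. A sketch (assuming numpy; R = 6 and the grid size are arbitrary choices):

```python
import numpy as np

def trap(w, x):
    # trapezoid rule for complex samples w over the real parameter x
    return np.sum((w[1:] + w[:-1]) * np.diff(x)) / 2

R, n = 6.0, 400001
f = lambda z: np.exp(1j * z * z)

s = np.linspace(0.0, R, n)                    # leg 1: real axis, 0 to R
leg1 = trap(f(s), s)

t = np.linspace(0.0, np.pi / 4, n)            # leg 2: arc z = R e^{it}
leg2 = trap(f(R * np.exp(1j * t)) * 1j * R * np.exp(1j * t), t)

u = np.linspace(R, 0.0, n)                    # leg 3: the ray arg z = pi/4,
w = np.exp(1j * np.pi / 4)                    #        traversed back to 0
leg3 = trap(f(u * w) * w, u)

print(abs(leg1 + leg2 + leg3))                # ≈ 0
```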

Some real integral calculation

Note to other readers: if you know what a “residue integral” is, this post is too elementary for you.

Recall the Cauchy Integral Formula (which we proved in class): if f is analytic on a simply connected open set A and \gamma is some piecewise smooth simple closed curve in A and z_0 is in the region enclosed by \gamma then f(z_0) =\frac{1}{2\pi i} \int_{\gamma} \frac{f(w)}{w-z_0} dw

It is somewhat startling that the integral of a related function along the boundary curve of a region determines the values of the function inside that region.

And this fact, plus the estimate |\int_{\gamma} f(w)dw | \leq M l(\gamma) , where l(\gamma) is the length of the curve \gamma and M = max \{|f(w)| : w \in \gamma \} , can lead to the evaluation of integrals of real variable functions.

Here is one collection of examples: let’s try to calculate \int^{\infty}_{-\infty} \frac{dx}{x^{2a} + 1} a \in \{1,2,3,... \}

Now consider the curve \gamma which runs from -R to R along the real axis and then from R back to -R along the “top semicircle” of |z| = R (positive imaginary part). Denote that semicircular arc by C_r . See the following figure:

So if we attempt to integrate \frac{1}{z^{2a} + 1} along this contour we get \int_{-R}^{R} \frac{1}{x^{2a} + 1}dx + \int_{C_r} \frac{1}{z^{2a} + 1} dz

Now as we take the limit as R \rightarrow \infty , the first integral becomes the integral we wish to evaluate. For the second integral: remember that |z| is constant along that curve and we know that |z^{2a} + 1| \geq |z^{2a}| - 1 = R^{2a} -1 along this curve, hence |\frac{1}{z^{2a} + 1}| \leq \frac{1}{R^{2a}-1} along C_r (assuming R > 1 )

So |\int_{C_r} \frac{1}{z^{2a} + 1} dz| \leq \frac{1}{R^{2a} -1} R \pi because l(C_r) = \pi R (think: magnitude of the integrand times the arc length).

Now lim_{R \rightarrow \infty} \frac{1}{R^{2a} -1} R \pi = 0 (provided, of course, that a \in \{1, 2,...\} ). So as R goes to infinity, the integral around the entire curve becomes the integral along the real axis, which is the integral that we are attempting to calculate. Note that because 2a is even, \frac{1}{x^{2a} + 1} is continuous on the whole real line.
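We can watch this estimate at work numerically (an illustration only; the function name `semicircle_integral` is mine). For a = 1 , the modulus of the arc integral should sit below \frac{\pi R}{R^{2} - 1} and shrink as R grows:

```python
import cmath, math

def semicircle_integral(a, R, n=20000):
    """Midpoint-rule approximation of the integral of 1/(z^{2a}+1) over the
    upper semicircle z = R e^{it}, t in [0, pi]."""
    total = 0
    for k in range(n):
        t = math.pi * (k + 0.5) / n
        z = R * cmath.exp(1j * t)
        dz = 1j * R * cmath.exp(1j * t) * (math.pi / n)  # z'(t) dt
        total += dz / (z ** (2 * a) + 1)
    return total

for R in (2.0, 10.0, 50.0):
    print(R, abs(semicircle_integral(1, R)), math.pi * R / (R ** 2 - 1))
```

Each modulus is under the bound, and both columns decay like \frac{1}{R} , consistent with the limit being zero.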

This, of course, does not tell us what \int^{\infty}_{-\infty} \frac{1}{x^{2a} + 1} dx is but we can use the Cauchy Integral Formula to calculate the integral around the whole curve, which is equal to the integral along the entire real axis.

So, in order to calculate the integral along the curve, we have to deal with where \frac{1}{z^{2a} + 1} is NOT analytic. This means finding the roots of the denominator z^{2a} + 1 that lie in the upper half plane (and are therefore contained within the curve when R is large enough). There will be a such roots in the upper half plane.

Label these roots w_1, w_2,...,w_a . Now draw small circles C_1, C_2, ...,C_a around each of these roots, small enough that each circle contains exactly ONE w_k . Within each circle, \frac{1}{z^{2a}+1 } is analytic EXCEPT at that one root.

Now here is the key: for each root w_k , write z^{2a} + 1 = (z-w_k)(p_k(z)) where p_k(z) = \frac{z^{2a} + 1}{z-w_k} . Then for each k , \int_{C_k} \frac{1}{z^{2a} + 1} dz = \int_{C_k} \frac{\frac{1}{p_k(z)}}{z-w_k }dz = 2 \pi i \frac{1}{p_k(w_k)} by the Cauchy Integral Formula (\frac{1}{p_k(z)} is analytic within C_k since we divided out the root in that region).

Now by using the method of cuts, the integral around the large curve \gamma is just the sum of the integrals along the smaller circles around the roots. This figure is the one for \frac{1}{z^4 + 1} .

So, putting it all together:

\int^{\infty}_{-\infty} \frac{1}{x^{2a} + 1} dx = (2 \pi i)(\frac{1}{p_1(w_1)} + \frac{1}{p_2(w_2)} +...+ \frac{1}{p_a(w_a)})

And YES, the i always cancels out so we do get a real valued answer.

I admit that calculation of p_k(w_k) can get a bit tedious but conceptually it isn’t hard.
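In fact the tedium can be delegated: here is a sketch of that bookkeeping (the function name `integral_value` is mine, not from the text). The roots of z^{2a} + 1 are e^{i \pi (2k-1)/(2a)} , k = 1,...,2a , and for each upper-half-plane root p_k(w_k) is the product of (w_k - w_j) over all the OTHER roots:

```python
import cmath, math

def integral_value(a):
    """2*pi*i times the sum of 1/p_k(w_k) over the upper-half-plane roots
    of z^(2a) + 1, exactly as in the formula above."""
    roots = [cmath.exp(1j * math.pi * (2 * k - 1) / (2 * a))
             for k in range(1, 2 * a + 1)]
    total = 0
    for i, wk in enumerate(roots):
        if wk.imag <= 0:
            continue  # only the a roots inside the contour contribute
        pk = 1
        for j, wj in enumerate(roots):
            if j != i:
                pk *= wk - wj  # p_k(w_k) = product over the other roots
        total += 1 / pk
    return 2j * math.pi * total

print(integral_value(1))  # ~ pi
print(integral_value(2))  # ~ pi/sqrt(2)
print(integral_value(3))  # ~ 2*pi/3, imaginary part ~ 0
```

Note that the imaginary parts come out at machine-precision zero, matching the remark that the i always cancels.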

Let’s do this for 2a= 2 and again for 2a = 4.

For \int^{\infty}_{-\infty} \frac{1}{x^2+1} dx the only root in the upper half plane is w_1 = i ; note that p_1(z) = \frac{z^2+1}{z-i} = z+i and so the integral is 2 \pi i (\frac{1}{i+i}) = 2 \pi \frac{1}{2} = \pi as expected (you can do this one with calculus techniques; use \int \frac{1}{x^2+1} dx = arctan(x) + C )

Now for \int^{\infty}_{-\infty} \frac{1}{x^4+1} dx

Label the roots w_1 = \frac{\sqrt{2}}{2}(1+i), w_2 = \frac{\sqrt{2}}{2}(-1+i), w_3 = \frac{\sqrt{2}}{2}(-1-i), w_4 = \frac{\sqrt{2}}{2}(1-i)

So z^4+1 = (z-w_1)(z-w_2)(z-w_3)(z-w_4) \rightarrow

p_1(w_1) = (w_1-w_2)(w_1-w_3)(w_1-w_4), p_2(w_2) = (w_2-w_1)(w_2-w_3)(w_2-w_4)

So \int^{\infty}_{-\infty} \frac{1}{x^4+1} dx = 2 \pi i(\frac{1}{p_1(w_1)} + \frac{1}{p_2(w_2)}) =

2 \pi i (\frac{1}{(\sqrt{2})^3})(\frac{1}{(1)(1+i)(i)} + \frac{1}{(-1)(i)(-1+i)}) = \frac{\pi}{\sqrt{2}} i \frac{1}{i}(\frac{1}{1+i} - \frac{1}{-1+i})

= \frac{\pi}{\sqrt{2}}\frac{-1+i -( 1+i)}{(1+i)(-1+i)} = \frac{\pi}{\sqrt{2}}\frac{-2}{-2} = \frac{\pi}{\sqrt{2}}
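As a cross-check assuming nothing beyond calculus (the helper name `simpson` is mine): approximate \int^{L}_{-L} \frac{1}{x^4+1} dx directly; the tails contribute at most 2 \int^{\infty}_L x^{-4} dx = \frac{2}{3L^3} , so L = 200 is plenty:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

approx = simpson(lambda x: 1 / (x ** 4 + 1), -200.0, 200.0, 400000)
print(approx, math.pi / math.sqrt(2))  # both ~ 2.2214415
```

The quadrature agrees with \frac{\pi}{\sqrt{2}} to well past six decimal places.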

A summary of some integral theorems

This post will contain no proofs but rather, statements of theorems that we will use. Note: all curves, unless stated otherwise, will be piecewise smooth and taken in the standard direction.

1. Suppose f is analytic on some open domain D and \gamma is a simple closed curve in D whose bounded region is also in D . Then \int_{\gamma}f(w)dw = 0

Note: the curve in question is a simple closed curve whose bounded region is in a domain where f is analytic.

So, we note that f(w) = \frac{1}{w} is analytic in the open annulus A= \{z| 1 < |z| < 3 \} and the curve |z|=2 lies in A , but \int_{|z|=2} \frac{1}{w} dw = 2 \pi i \neq 0 . The reason this does not violate the result is that the region bounded by the curve, \{z| |z| < 2 \} , is NOT contained in A .

2. If f is analytic within a simply connected open domain D and \gamma is a closed curve (not necessarily a simple closed curve; \gamma might have self intersections), then \int_{\gamma} f(w)dw = 0 . Note that this result follows from a careful application of 1. This also shows that if \gamma, \alpha are two paths connecting, say, w_0 to z_0 in D , then \int_{\gamma}f(w)dw = \int_{\alpha} f(w) dw . That is, the integrals are path independent.
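Path independence is easy to see numerically (a sketch; the function name `path_integral` and the particular paths are my choices). Integrate e^z from 0 to 1+i along two different paths and compare with the primitive e^z :

```python
import cmath

def path_integral(f, zs, n=1000):
    """Integrate f along the polygonal path through the points zs,
    using the midpoint rule on each segment."""
    total = 0
    for z0, z1 in zip(zs, zs[1:]):
        dz = (z1 - z0) / n
        total += sum(f(z0 + (k + 0.5) * dz) * dz for k in range(n))
    return total

straight = path_integral(cmath.exp, [0, 1 + 1j])       # straight segment
corner = path_integral(cmath.exp, [0, 1, 1 + 1j])      # along the axes
exact = cmath.exp(1 + 1j) - cmath.exp(0)               # F(z) = e^z

print(straight, corner, exact)  # all three agree
```

Both paths give the same value, namely F(1+i) - F(0) for the primitive F(z) = e^z .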

Why this is useful: suppose f is NOT analytic at, say, w_0 but is analytic everywhere else in some open domain D which contains w_0 . Now let \gamma, \alpha be two disjoint simple closed curves whose bounded regions contain w_0 . Then \int_{\gamma} f(w)dw = \int_{\alpha} f(w) dw even though the integrals might not be zero.

The curve formed by connecting \gamma to \alpha with a cut line (shown in green in the figure) and traversing \alpha backwards is NOT a simple closed curve, but it is a piecewise smooth closed curve which bounds a simply connected region that excludes the point where f is not analytic; hence the integral along this curve IS zero. Subtracting off the integrals along the cut lines (which are traversed in opposite directions, so they cancel) yields the equality of the two integrals.
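The cut-line argument in numbers (an illustration; the function name `circle_integral` is mine): integrating \frac{1}{z} around circles of radius 1 and 2, both enclosing the singularity at 0, gives the same value, 2 \pi i :

```python
import cmath, math

def circle_integral(f, radius, n=20000):
    """Midpoint-rule approximation of the integral of f around the
    circle z = radius * e^{it}, t in [0, 2*pi]."""
    total = 0
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        z = radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * math.pi / n)  # z'(t) dt
        total += f(z) * dz
    return total

print(circle_integral(lambda z: 1 / z, 1.0))  # ~ 2*pi*i
print(circle_integral(lambda z: 1 / z, 2.0))  # ~ 2*pi*i
```

Neither integral is zero, but they agree, exactly as the cut-line argument predicts.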

3. If f is analytic on a simply connected domain, then f has a primitive there (aka “antiderivative”). That is, there is some F where F' = f on that domain.

Note: you need “simply connected” here as \frac{1}{z} is analytic on C - \{0\} but has no primitive ON C - \{0\} .
But \frac{1}{z} does have a primitive on, say, \{z| Im(z) > 0 \} (Log(z) is one such primitive)

4. If f has a primitive on an open domain and \gamma is a closed curve in that domain, then \int_{\gamma} f(w)dw = 0 .
This follows from our “evaluation of an integral by a primitive” theorem. And note: the domain does NOT have to be simply connected.

Example: \frac{1}{z^2} has a primitive on C - \{0\} so if \gamma is any closed curve that does not run through the origin, \int_{\gamma} \frac{1}{z^2} dz = 0 . But this does NOT work for \frac{1}{z} as the candidate for a primitive is a branch of the log function, which must have discontinuities along some infinite ray (possibly not straight) whose endpoint is at the origin.
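Here is the contrast in numbers (a sketch; the function name `unit_circle_integral` is mine): \frac{1}{z^2} has the primitive -\frac{1}{z} on C - \{0\} , so its integral around the unit circle vanishes, while \frac{1}{z} picks up 2 \pi i :

```python
import cmath, math

def unit_circle_integral(f, n=20000):
    """Midpoint-rule approximation of the integral of f around |z| = 1."""
    total = 0
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        z = cmath.exp(1j * t)
        dz = 1j * cmath.exp(1j * t) * (2 * math.pi / n)  # z'(t) dt
        total += f(z) * dz
    return total

print(unit_circle_integral(lambda z: 1 / z ** 2))  # ~ 0
print(unit_circle_integral(lambda z: 1 / z))       # ~ 2*pi*i
```

The first integral is zero to machine precision; the second is not, because no single-valued primitive of \frac{1}{z} exists on all of C - \{0\} .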