Application of complex analysis to numerical analysis

I found this to be interesting.

For those who do not know: numerical analysis is the mathematics of finding approximate solutions to problems that do not have “closed form” (i.e., formula) solutions. Example: find a suitable approximation for, say, \int_0^1 sin(x^2) dx .
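To make this concrete, here is a minimal sketch in Python (assuming scipy is available; any quadrature routine would do):

```python
# A sketch: approximate \int_0^1 sin(x^2) dx numerically,
# since sin(x^2) has no elementary antiderivative.
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: np.sin(x**2), 0.0, 1.0)
print(value, err)  # value is roughly 0.31027; err is a small error bound
```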

This short article provides an introduction to an application:


Final remark: conformal mappings

Imagine two smooth curves in the complex plane: z_1(t), z_2(t) where z_1(t_0) = z_2(t_0) = z_0 . The angle between the curves at z_0 is determined by the angle between the tangent vectors z'_1(t_0), z'_2(t_0) at z_0 , which can be computed as arg(z'_1(t_0)) - arg(z'_2(t_0)) .

Now let f be analytic at z_0 and let us focus on just a small part of the two curves in question: let w_1(t)  = f(z_1(t)), w_2(t) = f(z_2(t)) . These will be smooth curves as well, albeit in the image space.

We have the chain rule, and so w'_1(t) = f'(z_1(t))z'_1(t) and w'_2(t) = f'(z_2(t))z'_2(t) .

Now look at the arguments: arg(w'_1(t_0)) = arg(f'(z_0)) + arg(z'_1(t_0)) and arg(w'_2(t_0)) = arg(f'(z_0)) + arg(z'_2(t_0)) , PROVIDED f'(z_0) \neq 0 (recall z_1(t_0) = z_2(t_0) = z_0 , so both derivatives of f are evaluated at the same point). Now calculate arg(w'_1(t_0)) - arg(w'_2(t_0)) = arg(z'_1(t_0)) - arg(z'_2(t_0)) , which is the angle between the original curves.

Any function that preserves angles in this manner is called conformal. So we showed that if f is analytic on an open disk and f' is never zero on the disk, then f is conformal there. Furthermore, our previous work shows that if f is analytic and one-to-one on a disk, then f' is never zero on the disk. So functions which are analytic and one-to-one (at least locally) are conformal (at least locally).
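Here is a quick numerical illustration of the angle preservation (a sketch; the function f(z) = e^z and the two test curves are my choices, not from the notes):

```python
# A sketch: f(z) = exp(z) should preserve the right angle at z_0 = 0
# between the real axis z_1(t) = t and the imaginary axis z_2(t) = i t.
import numpy as np

h = 1e-6  # step for finite-difference tangents

def tangent(curve, t0):
    # central-difference approximation to curve'(t0)
    return (curve(t0 + h) - curve(t0 - h)) / (2 * h)

z1 = lambda t: t + 0j
z2 = lambda t: 1j * t
w1 = lambda t: np.exp(z1(t))  # image curves under f
w2 = lambda t: np.exp(z2(t))

before = np.angle(tangent(z1, 0.0)) - np.angle(tangent(z2, 0.0))
after = np.angle(tangent(w1, 0.0)) - np.angle(tangent(w2, 0.0))
print(before, after)  # both -pi/2: the angle is preserved
```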

Examples: find where e^z, sin(z), cos(z), \sum_{k=0}^n a_k z^k are conformal. What about f(z) = \frac{a + bz}{c+dz} ?

Related exercise: where is f(z) = \frac{a + bz}{c+dz} analytic and one-to-one?

Maximum modulus principle

First: the last assignment of the year, due Monday: 3.1 2, 6, 14, 20. I hope we can finish 3.1, then do 3.2 before the semester ends.

Now to the new stuff.

Let’s think back to calculus: suppose we wanted to find the maximum of, say, f(x) = 1-x^2 on [-1,1] . You’d check the endpoints to find f(-1) = f(1) = 0 and you’d also find f'(x) = -2x = 0 \text{ when } x = 0 , so the maximum is (0, f(0)) = (0,1) . Also note that f takes the open interval (-1, 1) to the half open interval (0,1] .

That is, the image of an open set is NOT an open set, even though f is as differentiable as we could hope for.

The situation is different for non-constant analytic functions.

Let f be analytic and non-constant on some connected open set D . Let z_0 \in D .

Now the function f(z) - f(z_0) is not identically zero but is analytic and has a zero at z_0 ; say the zero is of order m .

Zeros of analytic functions are isolated, so one can find some r > 0 such that f(z) - f(z_0) has no zero for 0 < |z-z_0| \leq r . So let \delta = min\{ |f(z) - f(z_0)| : |z-z_0| = r \} > 0 . Now choose w with |w-f(z_0)| < \delta .

Then on the circle |z-z_0| = r we have |(f(z) -w) - (f(z) - f(z_0))| = |w-f(z_0)| < \delta \leq |f(z) - f(z_0)| , so by Rouché’s Theorem (which we discuss below) f(z) - w has a zero inside the circle; that is, every w with |w-f(z_0)| < \delta is a value of f . Hence f takes open sets to open sets. Now if |f(z)| \leq |f(z_0)| for all |z-z_0| < r , then f(z_0) lies on the boundary of the open set \{f(z): |z-z_0| < r \} (points of modulus larger than |f(z_0)| , arbitrarily close to f(z_0) , are missed). This contradicts that f(z_0) lies in an open set in f(D) .

Here is another way to see the contradiction: since f(z_0) is not on the boundary of the image, we can find \epsilon > 0 such that every w with |w-f(z_0)| < \epsilon is of the form w = f(z_1) ; choosing such a w with |w| > |f(z_0)| gives |f(z_1)| > |f(z_0)| , so |f| attains no interior maximum. This is the Maximum Modulus Principle.

This leads to Schwarz’s Lemma: if f is analytic on |z| < 1 , f(0) = 0 and |f(z)| \leq 1 for all z in the disk, then, for |z| < 1 we have |f(z)| \leq |z| .

Here is why: set g(z) =\frac{f(z)}{z} and note that g is analytic on |z| < 1 (removable singularity at 0 ). On the circle |z| = r < 1 we have |g(z)| \leq \frac{1}{r} , and by the Maximum Modulus Principle |g(z)| \leq \frac{1}{r} for all z inside the circle |z| = r as well. Now let r \rightarrow 1 to get the result.

Note also: if |g(z_0)| = 1 at some point z_0 in the unit disk, we see that |g| has an interior maximum, which means that g is constant; hence g(z) = \frac{f(z)}{z} = \lambda \rightarrow f(z) = \lambda z where |\lambda| = 1
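A quick numerical sanity check of the lemma (a sketch; the test function f(z) = z(z+1)/2 is my choice, and it does satisfy f(0) = 0 and |f(z)| \leq 1 on the disk):

```python
# A sketch: test Schwarz's Lemma on f(z) = z (z + 1) / 2, which is
# analytic on |z| < 1 with f(0) = 0 and |f(z)| <= 1 there.
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 1000)
theta = rng.uniform(0.0, 2 * np.pi, 1000)
z = r * np.exp(1j * theta)  # random points in the unit disk

f = z * (z + 1) / 2
assert np.all(np.abs(f) <= np.abs(z) + 1e-12)  # |f(z)| <= |z|, as the lemma predicts
print("lemma holds on all sampled points")
```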

Onward to the argument principle and Rouché’s Theorem

Consider the function f(z) = z^3 and look at what it does to values on the circle z(t) = e^{it}, t \in [0, 2\pi) . f(e^{it}) = e^{i3t} and as we go around the circle once, we see that arg(f(z)) = arg(e^{i3t}) ranges from 0 to 6 \pi . Note also that f has a zero of order 3 at the origin and 3(2 \pi) = 6 \pi . That is NOT a coincidence.

Now consider the function g(z) = \frac{1}{z^3} and look at what it does to values on the circle z(t) = e^{it}, t \in [0, 2\pi) . g(e^{it}) = e^{-i3t} and as we go around the circle once, we see that arg(g(z)) = arg(e^{-i3t}) ranges from 0 to -6 \pi . And here, g has a pole of order 3 at the origin. This, too, is not a coincidence.

We can formalize this somewhat: in the first case, suppose we let \gamma be the unit circle taken once around in the standard direction and let’s calculate:

\int_{\gamma} \frac{f'(z)}{f(z)} dz = \int_{\gamma}\frac{3z^2}{z^3}dz = 3 \int_{\gamma} \frac{1}{z}dz = 6 \pi i

In the second case: \int_{\gamma} \frac{g'(z)}{g(z)} dz = \int_{\gamma}\frac{-3z^{-4}}{z^{-3}}dz = -3 \int_{\gamma} \frac{1}{z}dz = -6 \pi i

Here is what is going on: you might have been tempted to think \int_{\gamma} \frac{f'(z)}{f(z)} dz = Log(f(z))|_{\gamma} = (ln|f(z)| +iArg(f(z)) )|_{\gamma} and this almost works; remember that Arg(z) switches values abruptly along a ray from the origin (for the principal branch, the negative real axis), and any version of the argument function must do so along SOME ray from the origin. The real part of the integral (the ln|f(z)| part) cancels out when one goes around the closed curve. The argument part (the imaginary part) does not; in fact we pick up a value of 2 \pi i every time f(z) crosses that given ray, and in the case of f(z) = z^3 we cross that ray 3 times.

This is the argument principle in action.
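Both loop integrals are easy to confirm numerically (a sketch; a plain Riemann sum on a uniform grid is spectrally accurate here because the integrand is smooth and periodic in t ):

```python
# A sketch: numerically integrate f'/f once around the unit circle for
# f(z) = z^3 and g(z) = z^{-3}; the answers should be 6*pi*i and -6*pi*i.
import numpy as np

n = 100000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * t)
dz = 1j * z  # derivative of e^{it} with respect to t

def loop_integral(integrand):
    # uniform Riemann sum over the parameter interval
    return np.sum(integrand(z) * dz) * (2 * np.pi / n)

print(loop_integral(lambda z: 3 * z**2 / z**3))     # ~ 6*pi*i
print(loop_integral(lambda z: -3 * z**-4 / z**-3))  # ~ -6*pi*i
```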

Now of course, this principle can work in the vicinity of any isolated singularity or zero or along a curve that avoids singularities and zeros but encloses a finite number of them. The mathematical machinery we develop will help us with this concept.

So, let’s suppose that f has a zero of order m at z = z_0 . This means that f(z) = (z-z_0)^m g(z) where g(z_0) \neq 0 and g is analytic on some open disk about z_0 .

Now calculate: \frac{f'(z)}{f(z)} = \frac{m(z-z_0)^{m-1} g(z) + (z-z_0)^m g'(z)}{(z-z_0)^m g(z)} = \frac{m}{z-z_0} + \frac{g'(z)}{g(z)} . Now note that the second term of the sum is an analytic function; hence the Laurent series for \frac{f'(z)}{f(z)} has \frac{m}{z-z_0} as its principal part; hence Res(\frac{f'(z)}{f(z)}, z_0) = m

Now suppose that f has a pole of order m at z_0 . Then f(z) =\frac{1}{h(z)} where h(z) has a zero of order m . So as before write f(z) = \frac{1}{(z-z_0)^m g(z)} = (z-z_0)^{-m}(g(z))^{-1} where g is analytic and g(z_0) \neq 0 . Now f'(z) = -m(z-z_0)^{-m-1}(g(z))^{-1} -(g(z))^{-2}g'(z)(z-z_0)^{-m} and
\frac{f'(z)}{f(z)} =\frac{-m}{z-z_0} -  \frac{g'(z)}{g(z)} where the second term is an analytic function. So Res(\frac{f'(z)}{f(z)}, z_0) = -m

This leads to the following result: let f be analytic on some open set containing a piecewise smooth simple closed curve \gamma and analytic on the region bounded by the curve as well, except for a finite number of poles. Also suppose that f has no zeros on the curve.

Then \int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (\sum^k_{j =1} m_j - \sum^l_{j = 1}n_j ) where m_1, m_2, ..., m_k are the orders of the k zeros of f inside \gamma and n_1, n_2, ..., n_l are the orders of the poles of f inside \gamma .

This follows directly from the theory of cuts.

Use of our result: let f(z) = \frac{(z-i)^4(z+2i)^3}{z^2 (z+3i-4)} and let \Gamma be a circle of radius 10 (large enough to enclose all poles and zeros of f ). Then \int_{\Gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4 + 3 -2-1) = 8 \pi i . Now if \gamma is the circle |z| = 3 , we see that \gamma encloses the zeros at i, -2i and the pole at 0 , but not the pole at 4-3i , so \int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4+3 -2) = 10 \pi i
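Both counts can be checked numerically (a sketch; the logarithmic derivative below is written out by hand from the factorization of f ):

```python
# A sketch: integrate f'/f numerically around |z| = 10 and |z| = 3 for
# f(z) = (z-i)^4 (z+2i)^3 / (z^2 (z+3i-4)), using the logarithmic
# derivative: sum of m/(z - z0) over zeros, minus the pole terms.
import numpy as np

def fp_over_f(z):
    return 4/(z - 1j) + 3/(z + 2j) - 2/z - 1/(z + 3j - 4)

def circle_integral(radius, n=200000):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = radius * np.exp(1j * t)
    dz = 1j * z  # dz/dt
    return np.sum(fp_over_f(z) * dz) * (2 * np.pi / n)

print(circle_integral(10))  # ~ 8*pi*i
print(circle_integral(3))   # ~ 10*pi*i
```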

Now this is NOT the main use of this result; the main use is to establish the argument principle and to get to Rouché’s Theorem, which, in turn, can be used to deduce facts about the zeros of an analytic function.

Argument principle: our discussion about integrating \frac{f'(z)}{f(z)} around a closed curve (assuming that the said curve runs through no zeros or poles of f and encloses at most a finite number of them) shows that, as we traverse the curve, the argument of the function changes by 2 \pi (\text{no. of zeros} - \text{no. of poles}) where the zeros and poles are counted with multiplicities.

Example: consider the function f(z) = z^8 + z^2 + 1 . Let’s find how many zeros it has in the first quadrant.

If we consider the quarter circle of very large radius R (that stays in the first quadrant and is large enough to enclose all first quadrant zeros) and note f(Re^{it})  = R^8e^{i8t}(1+ \frac{1}{R^6}e^{-i6t} + \frac{1}{R^8}e^{-i8t}) , we see that the argument changes by about 8(\frac{\pi}{2}) = 4 \pi along the arc. The function has no roots along the positive real axis (there f(x) = x^8 + x^2 + 1 \geq 1 ), so the argument does not change along that segment. Now setting z = iy to run along the positive imaginary axis we get f(iy) = y^8 -y^2 + 1 , which is positive for large y , has one relative minimum at y = 2^{\frac{-1}{3}} where its value is positive, and equals 1 at y = 0 ; so the argument does not change along that segment either. So the total change is 4 \pi , and 4 \pi = 2 \pi (\text{no. of roots in the first quadrant}) means that we have 2 roots in the first quadrant.

In fact, you can find an online calculator which estimates them here.
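Alternatively, a quick numerical estimate (a sketch using numpy’s polynomial root finder):

```python
# A sketch: estimate the roots of z^8 + z^2 + 1 with numpy and count
# the ones in the open first quadrant.
import numpy as np

coeffs = [1, 0, 0, 0, 0, 0, 1, 0, 1]  # z^8 + z^2 + 1, highest power first
roots = np.roots(coeffs)
in_q1 = [w for w in roots if w.real > 0 and w.imag > 0]
print(len(in_q1))  # 2, matching the argument principle count
```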

Now for Rouché’s Theorem
Here is Rouché’s Theorem: let f, g be analytic on some piecewise smooth simple closed curve C and on the region that C encloses. Suppose that on C we have |f(z) + g(z)| < |f(z)| . Then f, g have the same number of zeros (counted with multiplicity) inside C .

Note: the inequality precludes f from having a zero on C , and we can assume that f, g have no common zeros, for if they do, we can “cancel them out” by, say, writing f(z) = (z-z_0)^m f_1(z), g(z) = (z-z_0)^mg_1(z) at the common zeros.

Now divide through by |f(z)| : on C we have |1 + \frac{g(z)}{f(z)}| < 1 , which means that the values of the new function \frac{g(z)}{f(z)} lie within the circle |w+1| < 1 in the image space. This means that the argument of \frac{g(z)}{f(z)} always lies strictly between \frac{\pi}{2} and \frac{3 \pi }{2} . So the argument cannot change by 2 \pi as we traverse C , and by the argument principle the number of zeros of \frac{g(z)}{f(z)} inside C must equal the number of poles, up to multiplicity. But the poles come from the zeros of the denominator f and the zeros come from the zeros of the numerator g .

And note: once again, what happens on the boundary of a region (the region bounded by the closed curve) determines what happens INSIDE the curve.

Now let’s see what we can do with this. Consider our g(z) = z^8 + z^2 + 1 . First, |z^8 -(z^8 + z^2 + 1)| =|z^2+1| < |z^8| on the circle |z| = \frac{3}{2} (and in fact on slightly smaller circles); here we apply the theorem with f(z) = z^8 and -(z^8+z^2+1) in the role of g (a function and its negative have the same zeros). This means that z^8 and z^8+z^2 + 1 have the same number of roots inside the circle |z| = \frac{3}{2} : eight roots (counting multiplicity). Now note that |z^8 +z^2 + 1 -1| = |z^8+z^2| < 1 on the circle |z| = \frac{2}{3} , so z^8 +z^2 + 1 and the constant function 1 have the same number of zeros inside the circle |z| = \frac{2}{3} : none at all. This means that all of the roots of z^8+z^2 + 1 lie in the annulus \frac{2}{3} < |z| < \frac{3}{2}
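Again, this is easy to check numerically (a sketch, reusing the root finder from above):

```python
# A sketch: check that every root of z^8 + z^2 + 1 has modulus strictly
# between 2/3 and 3/2, as the Rouché argument predicts.
import numpy as np

moduli = np.abs(np.roots([1, 0, 0, 0, 0, 0, 1, 0, 1]))
print(moduli.min(), moduli.max())
assert np.all((moduli > 2/3) & (moduli < 3/2))
```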

Residue integral calculation, Cauchy Principal Value, etc.

First: the Cauchy Principal Value. Remember when you were in calculus and did \int^{\infty}_{-\infty} \frac{dx}{1+x^2} ? If you wrote something like lim_{b \rightarrow \infty} \int^{b}_{-b} \frac{1}{1+x^2} dx you lost some credit; you were supposed to say that the integral converges if and only if lim_{a \rightarrow \infty} \int^{a}_{0} \frac{1}{1+x^2} dx AND lim_{b \rightarrow \infty} \int^{0}_{-b} \frac{1}{1+x^2} dx BOTH converge... the two limits had to be independent of one another.

That is correct, of course, but IF you knew that both integrals converged then, in fact, lim_{b \rightarrow \infty} \int^{b}_{-b} \frac{1}{1+x^2} dx gave the correct answer.

As for why you need convergence: lim_{b \rightarrow \infty} \int^{b}_{-b} x dx = lim_{b \rightarrow \infty} \frac{x^2}{2}|^b_{-b} = 0 , so while the integral diverges, this particular symmetric limit is zero. This limit has a name: it is called the Cauchy Principal Value of the integral, and it is equal to the value of the improper integral PROVIDED the integral converges.
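A tiny numeric illustration of the distinction (a sketch; I simply evaluate the antiderivative \frac{x^2}{2} at the truncation endpoints):

```python
# Symmetric truncations of \int x dx give 0 (the Cauchy Principal Value),
# while asymmetric truncations such as [-b, 2b] grow without bound.
for b in (10.0, 100.0, 1000.0):
    symmetric = b**2 / 2 - b**2 / 2         # \int_{-b}^{b} x dx
    asymmetric = (2 * b)**2 / 2 - b**2 / 2  # \int_{-b}^{2b} x dx
    print(b, symmetric, asymmetric)
```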

We will use this concept in some of our calculations.

One type of integral: let f(z) be a function with at most a finite number of isolated singularities, none of which lie on the real axis. Suppose that for R large enough, |f(Re^{it})| \leq \frac{M}{R^p} for some constant M and some p > 1 . Let H denote the upper half plane ( Im(z) > 0 ). Then \int^{\infty}_{-\infty} f(x) dx = 2 \pi i \sum \{ \text{residues in } H\}

The idea: let C represent the following contour: the upper half of the circle |z| = R followed by the diameter [-R, R] , taken once around in the standard direction. Of course \int_C f(z) dz = 2 \pi i \sum \{ \text{residues enclosed by } C \}  and if R is large enough, the contour encloses all of the singularities in the upper half plane.

But the integral is also equal to the integral along the real axis followed by the integral along the upper half circle. As for the integral along the upper half circle (call it \Gamma ):

|\int_{\Gamma} f(z) dz | \leq \frac{M}{R^p} \pi R = \frac{\pi M}{R^{p-1}} \rightarrow 0 as R \rightarrow \infty because p-1 > 0 . The bound comes from the “maximum of the function being integrated times the arc length of the curve” (the ML estimate).

So the only non-zero part left is lim_{R \rightarrow \infty} \int^{R}_{-R} f(x) dx , which is the Cauchy Principal Value, which, IF the integral is convergent, is equal to 2 \pi i \sum \{ \text{residues in } H\} .

Application: \int^{\infty}_{-\infty} \frac{x^2}{x^4+1} dx . Note that if f(z) = \frac{z^2}{z^4+1} then |f(Re^{it})| \leq \frac{R^2}{R^4-1} , which meets our criteria. So the upper half plane singularities are at z_0 = e^{i\frac{\pi}{4}}, z_1 = e^{i\frac{3\pi}{4}} ; both are simple poles. So we can plug these values into \frac{z^2}{4z^3} = \frac{1}{4}z^{-1} (the \frac{h}{g'} formula for simple poles)

So to finish the integral up, the value is 2 \pi i \frac{1}{4}((e^{i\frac{\pi}{4}})^{-1} +(e^{i\frac{3\pi}{4}})^{-1} ) = \frac{1}{2} \pi i (-2i\frac{1}{\sqrt{2}}) = \frac{\pi}{\sqrt{2}}
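A numeric confirmation of this value (a sketch, assuming scipy):

```python
# Check that \int_{-inf}^{inf} x^2/(x^4+1) dx agrees with pi/sqrt(2).
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: x**2 / (x**4 + 1), -np.inf, np.inf)
print(value, np.pi / np.sqrt(2))  # both ~ 2.221441
```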

Now for a more delicate example
This example will feature many concepts, including the Cauchy Principal Value.

We would like to calculate \int^{\infty}_0 \frac{sin(x)}{x} dx with the understanding that lim_{x \rightarrow 0} \frac{sin(x)}{x} = 1 so we assume that \frac{sin(0)}{0} \text{``="} 1 .

First, we should show that \int^{\infty}_0 \frac{sin(x)}{x} dx converges. We can do this via the alternating series test:

\int^{\infty}_0 \frac{sin(x)}{x} dx = \sum^{\infty}_{k=0} \int_{\pi k}^{\pi (k+1)} \frac{sin(x)}{x} dx which forms an alternating series of terms whose magnitudes decrease to 0 (we are adding the signed areas bounded by the graph of \frac{sin(x)}{x} and the x axis; the signed areas alternate between positive and negative, and their absolute values decrease to zero since |\frac{sin(x)}{x}| \leq |\frac{1}{x}| for x > 0 ).

(Figure omitted: the graphs of Si(x) = \int^x_0 \frac{sin(t)}{t} dt \text{ and } \frac{sin(x)}{x} .)

Aside: note that \int^{\infty}_0 |\frac{sin(x)}{x}| dx diverges.
Reason: \int^{(k+1) \pi}_{k \pi} |\frac{sin(x)}{x}| dx > \frac{1}{(k+1) \pi} \int^{(k+1) \pi}_{k \pi}|sin(x)| dx = \frac{2}{(k+1) \pi} and these form the terms of a divergent series.

So we know that our integral converges. So what do we do? Our basic examples assume no poles on the real axis, and we also have the trig function to deal with.

So here is what turns out to work: use f(z) = \frac{e^{iz}}{z} . Why? For one, along the real axis ( z = x ) we have \frac{e^{iz}}{z} = \frac{e^{ix}}{x} = \frac{cos(x)}{x} + i \frac{sin(x)}{x} and so what we want will be the imaginary part of our expression... once we figure out how to handle zero AND the real part. Now what about our contour?

Via (someone else’s) cleverness, we use the following contour:

We have a small half circle around the origin (in the upper half plane), the line segment to the larger half circle, the larger half circle in the upper half plane, and the segment from the larger half circle back to the smaller one. If we call the complete contour \gamma , then by Cauchy’s theorem \int_{\gamma} \frac{e^{iz}}{z} dz = 0 no matter what r \text{ and } R are, provided 0 < r < R . So the idea will be to let r \rightarrow 0, R \rightarrow \infty and to see what happens.

Let L stand for the larger half circle and we hope that as R \rightarrow \infty we have \int_L \frac{e^{iz}}{z} dz \rightarrow 0

This estimate is a bit trickier than the other estimates.

Reason: if we try |\frac{e^{iRe^{it}}}{Re^{it}}| = |\frac{e^{iRcos(t)}e^{-Rsin(t)}}{Re^{it}}| = |\frac{e^{-Rsin(t)}}{R}| , which can equal \frac{1}{R} when t = 0, t = \pi . Then multiplying this by the length of the curve ( \pi R ) we simply get \pi , which does NOT go to zero.

So we turn to integration by parts: using u = \frac{1}{z}, dv = e^{iz}dz, du = -\frac{1}{z^2}dz, v = -ie^{iz} we get \int_L \frac{e^{iz}}{z} dz = -ie^{iz}\frac{1}{z}|_L - i \int_L \frac{e^{iz}}{z^2} dz

Now as R \rightarrow \infty we have |-ie^{iz}\frac{1}{z}|_L | \leq \frac{2}{R} (the endpoints of L lie on the real axis, where |e^{iz}| = 1 ), which does go to zero.

And the integral: we now have a z^2 in the denominator instead of merely z hence

|\int_L \frac{e^{iz}}{z^2} dz | \leq \frac{1}{R^2} \pi R = \frac{\pi}{R} which also goes to zero.

So we have what we want on the upper semicircle.

The smaller semicircle: We will NOT get zero as we integrate along the small semicircle around the origin and let the radius shrink to zero. Let’s denote this semicircle by l (and note that we traverse it in the opposite of the standard direction), so we are really calculating -\int_l \frac{e^{iz}}{z} dz . This is not an elementary integral. But what we can do is expand \frac{e^{iz}}{z} into its Laurent series centered at zero and obtain:
\frac{e^{iz}}{z} = \frac{1}{z} (1 + (iz) + (iz)^2 \frac{1}{2!} + (iz)^3\frac{1}{3!} + ...) = \frac{1}{z} + i - z \frac{1}{2!} - i z^2 \frac{1}{3!} + ... = \frac{1}{z} + g(z) where g(z) represents the regular part... the analytic part.

So now -\int_l \frac{e^{iz}}{z} dz = -\int_l \frac{1}{z} dz  -\int_l g(z) dz . Now the second integral goes to zero as r \rightarrow 0 because it is the integral of an analytic function over a smooth curve whose length is shrinking to zero (or alternately: whose endpoints are getting closer together; we’d have G(-r) - G(r) where G is a primitive of g , and both endpoints tend to 0 ).

The part of the integral that survives is -\int_l \frac{1}{z} dz = -\pi i for ANY r (check this: use z(t) = re^{it}, t \in [0, \pi] ).

So adding up the integrals (I am suppressing the integrand for brevity)

\int_L + \int^{-r}_{-R} - \int_l + \int^{R}_r = 0 \rightarrow \int^{-r}_{-R} + \int^{R}_r = \int_l - \int_L \rightarrow \pi i as r \rightarrow 0, R \rightarrow \infty .

So we have lim_{R \rightarrow \infty, r \rightarrow 0 } \left( \int^{-r}_{-R} (\frac{cos(x)}{x} + i \frac{sin(x)}{x}) dx +  \int^{R}_{r} (\frac{cos(x)}{x} + i \frac{sin(x)}{x}) dx \right) = \pi i

Now for the cosine integral: yes, as r \rightarrow 0 this integral diverges. BUT note that for all finite R, r > 0 we have \int^{-r}_{-R} \frac{cos(x)}{x} dx  +  \int^{R}_{r} \frac{cos(x)}{x}  dx = 0 , as \frac{cos(x)}{x} is an odd function. So the Cauchy Principal Value of \int^{\infty}_{-\infty} \frac{cos(x)}{x} dx is 0 .

We can now use the imaginary parts and note that the integrals converge (as previously noted) and so:

\int^{\infty}_{-\infty} i\frac{sin(x)}{x} dx = i \pi \rightarrow \int^{\infty}_0 \frac{sin(x)}{x}dx = \frac{\pi}{2} (using the fact that \frac{sin(x)}{x} is an even function).
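As a numeric check (a sketch): scipy’s special-function library tabulates Si(x) , which should approach \frac{\pi}{2} :

```python
# scipy.special.sici(x) returns the pair (Si(x), Ci(x));
# Si(x) -> pi/2 as x -> infinity.
import numpy as np
from scipy.special import sici

for x in (10.0, 100.0, 1000.0):
    si, ci = sici(x)
    print(x, si)  # oscillates toward pi/2 ~ 1.570796...
```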

More about residue calculation

Ok, what about the case of, say, tan(z) = \frac{sin(z)}{cos(z)} at z = \frac{\pi}{2} ? One way: note lim_{z \rightarrow \frac{\pi}{2}} \frac{sin(z)(z-\frac{\pi}{2})}{cos(z)} can be evaluated by L’Hopital’s rule, which leads to lim_{z \rightarrow \frac{\pi}{2}}\frac{sin(z)+(z-\frac{\pi}{2})cos(z)}{-sin(z)} =-1 ; note how the product rule led to the numerator being what it was. In general, if we are interested in \frac{h(z)}{g(z)} where h(z_0) \neq 0, g(z_0) = 0 , the residue will be given by \frac{h(z_0)}{g'(z_0)} provided g'(z_0) \neq 0 ; that is, provided g has a zero of order 1 at z_0
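Both computations are easy to confirm symbolically (a sketch, assuming sympy):

```python
# A sketch: the residue of tan(z) at pi/2, computed two ways --
# sympy's built-in residue and the h/g' formula with h = sin, g = cos.
import sympy as sp

z = sp.symbols('z')
print(sp.residue(sp.tan(z), z, sp.pi / 2))                    # -1
print((sp.sin(z) / sp.diff(sp.cos(z), z)).subs(z, sp.pi / 2)) # h(z0)/g'(z0) = -1
```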

Many residue calculation theorems work in such a manner.

In general, if f has a pole of order m at z=z_0 then f(z) = \frac{b_m}{(z-z_0)^m} + \frac{b_{m-1}}{(z-z_0)^{m-1}} + ...+\frac{b_1}{z-z_0} + ... , so if we multiply both sides by (z-z_0)^m and then differentiate repeatedly: the term b_1(z-z_0)^{m-1} gets differentiated m-1 times to yield (m-1)(m-2)...(2)(1) b_1 = (m-1)! \, b_1 , while every other term either vanishes or retains a factor of (z-z_0) . So b_1 = \frac{1}{(m-1)!} lim_{z \rightarrow z_0} \frac{d^{m-1}}{dz^{m-1}}((z-z_0)^{m}f(z))

Example: \frac{1}{z^2 (z+1)} has a pole of order 2 at the origin. So the residue: (\frac{z^2}{z^2(z+1)})' = -\frac{1}{(z+1)^2} which is -1 at z = 0 . That is simpler than calculating the series.
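Here is a small sympy sketch of that formula (the helper name residue_at_pole is mine, for illustration), checked against the example above:

```python
# A sketch of the order-m residue formula
#   b_1 = (1/(m-1)!) * lim_{z->z0} d^{m-1}/dz^{m-1} [ (z-z0)^m f(z) ].
import sympy as sp

z = sp.symbols('z')

def residue_at_pole(f, z0, m):
    # differentiate (z - z0)^m * f(z) a total of m-1 times, take the limit
    expr = sp.diff((z - z0)**m * f, z, m - 1)
    return sp.limit(expr, z, z0) / sp.factorial(m - 1)

print(residue_at_pole(1 / (z**2 * (z + 1)), 0, 2))  # -1
```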

There are many other formulas that we could develop but most are some combination of the ideas that we just discussed. And sometimes, it is best to just grit your teeth and calculate the series.

Residues: integrals, calculation, etc.

Here is the idea: we’d like (relatively) easy ways to integrate functions around simple closed curves. For the purposes of this particular note, we assume the usual about the closed curve (piecewise smooth, once around in the standard direction) and that the function in question does not have singularities along the said closed curve and only a finite number of isolated singularities in the simply connected region enclosed by the curve. If we have no singularities in the said region, then our integral is zero and there is nothing more to do.

Now by the theory of cuts, we can assume that the integral around the given simple closed curve is equal to the sum of the integrals of the function around small circles surrounding each singular point.

So, integrals of this type just boil down to calculating the integrals of the function around these isolated singularities and then just adding.

So let’s examine a typical singularity, say z_0 . Let R denote the circle of radius R surrounding z_0 , and assume that R is small enough that we don’t enclose any other singularity.

Now on a punctured disk (excluding z_0 ) bounded by this circle, we have

f(z) = \sum^{\infty}_{k=1} b_k \frac{1}{(z-z_0)^k} + \sum^{\infty}_{k=0} c_k(z-z_0)^k . Now we have

\int_R f(z)dz = \int_R \left( \sum^{\infty}_{k=1} b_k \frac{1}{(z-z_0)^k} + \sum^{\infty}_{k=0} c_k(z-z_0)^k \right) dz

= \int_R \sum^{\infty}_{k=1} b_k \frac{1}{(z-z_0)^k} dz + \int_R \sum^{\infty}_{k=0} c_k(z-z_0)^k dz

Now the second integral is the integral of a power series (the regular part of the Laurent series), which is analytic inside the circle; therefore the integral is zero, since we are integrating an analytic function along a simple closed curve. So only the integral of the principal part matters.

Now we can interchange integration and summation by the work we did to develop the Laurent series to begin with.

Doing that we obtain:

\int_R f(z)dz =  \sum^{\infty}_{k=1} \int_R b_k \frac{1}{(z-z_0)^k} dz and keep in mind that the b_k are constants. So we have:

\int_R f(z)dz =  \sum^{\infty}_{k=1} b_k \int_R  \frac{1}{(z-z_0)^k} dz and so we need only evaluate

\int_R \frac{1}{(z-z_0)^k} dz to evaluate our integral.

Now for k \in \{2, 3, 4, ... \} we note that \frac{1}{(z-z_0)^k} has a primitive in the punctured disk. OR, if you don’t like that, you can do the substitution z(t) = Re^{it} +z_0, dz = Rie^{it}dt, t \in [0, 2 \pi] and find that these integrals all evaluate to zero.

So only the integral \int_R \frac{1}{z-z_0} dz matters, and the aforementioned substitution shows that this integral evaluates to 2 \pi i .

So all of this work shows that \int_R f(z) dz = 2 \pi i b_1 ; only the coefficient of the \frac{1}{z-z_0} term matters.

b_1 is called the residue of f at z_0 and is sometimes denoted Res(f,z_0) .

So now we see that the integral around our original closed curve is just 2 \pi i \sum_k Res(f, z_k) , that is, 2 \pi i times the sum of the residues of f which are enclosed by our closed curve.
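A numeric sanity check of this (a sketch; the test function and its residues, -1 at 0 and +1 at -1 , are computed by the methods discussed in the entry above):

```python
# f(z) = 1/(z^2 (z+1)) has residues -1 at z = 0 and +1 at z = -1,
# so its integral around |z| = 2 should be 2*pi*i * (-1 + 1) = 0.
import numpy as np

n = 200000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = 2 * np.exp(1j * t)
dz = 1j * z  # dz/dt for z(t) = 2 e^{it}
integral = np.sum(dz / (z**2 * (z + 1))) * (2 * np.pi / n)
print(integral)  # ~ 0
```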

This means that it would be good to have “easy” ways to calculate residues of a function.

Now of course, we could always just calculate the Laurent series of the function evaluated at each singularity in question. SOMETIMES, this is easy. And if the singularity in question is an essential one, it is what we have to do.

But sometimes the calculation of the series is tedious and it would be great to have another method.

The idea
Ok, suppose we have a simple pole at z_0 (we’ll build upon this idea). This means that f(z) = b_1\frac{1}{z-z_0} + \sum^{\infty}_{k=0} c_k (z-z_0)^k Now note that (z-z_0)f(z) = b_1 +  \sum^{\infty}_{k=0} c_k (z-z_0)^{k+1} so we obtain:

b_1 = lim_{z \rightarrow z_0} (z-z_0)f(z)

Example: calculate the residue of f(z) = \frac{1}{sin(z)} at z = 0 .

lim_{z \rightarrow 0}z\frac{1}{sin(z)} =lim_{z \rightarrow 0}\frac{z}{sin(z)} =1 .

Now sometimes this is straightforward, and sometimes it isn’t.

Now if f(z) = \frac{1}{g(z)} where g(z) has an isolated zero at z = z_0 (as is the case when we have a pole), we can then write (z-z_0) f(z) = \frac{z-z_0}{g(z)} where both the numerator and denominator functions are going to zero.

This should remind you of L’Hopital’s rule from calculus. So we should derive a version for our use.

Now if both g, h have isolated zeros of finite order at z_0 (so h(z_0) = g(z_0) = 0 ), then for a small increment \Delta z :

\frac{h(z_0+\Delta z)}{g(z_0+\Delta z)} =  \frac{h(z_0+\Delta z) - h(z_0)}{g(z_0+\Delta z) - g(z_0)} (because h(z_0)=g(z_0)=0 )

= \frac{\frac{h(z_0+\Delta z) - h(z_0)}{\Delta z}}{\frac{g(z_0+\Delta z) - g(z_0)}{\Delta z}} . Now take a limit as \Delta z \rightarrow 0 to obtain \frac{h'(z_0)}{g'(z_0)} , which can be easier to calculate.

If we apply this technique to \frac{1}{sin(z)} we get \frac{(z)'}{(sin(z))'} = \frac{1}{cos(z)} , which is 1 at z = 0

Ok, my flight is boarding so I’ll write part II later.