Residue integral calculation, Cauchy Principal Value, etc.

First: the Cauchy Principal Value. Remember when you were in calculus and did \int^{\infty}_{-\infty} \frac{dx}{1+x^2} ? If you wrote something like lim_{b \rightarrow \infty} \int^{b}_{-b} \frac{1}{1+x^2} dx you lost some credit: you were supposed to say that the integral converges if and only if BOTH lim_{a \rightarrow \infty} \int^{a}_{0} \frac{1}{1+x^2} dx AND lim_{b \rightarrow \infty} \int^{0}_{-b} \frac{1}{1+x^2} dx converge, the two limits being taken independently of one another.

That is correct, of course, but IF you knew that both integrals converged then, in fact, lim_{b \rightarrow \infty} \int^{b}_{-b} \frac{1}{1+x^2} dx gave the correct answer.

As for why you need convergence: lim_{b \rightarrow \infty} \int^{b}_{-b} x dx = lim_{b \rightarrow \infty} \frac{x^2}{2}\big|^{b}_{-b} = 0 , so while the integral \int^{\infty}_{-\infty} x dx diverges, this particular symmetric limit is zero. This limit has a name: it is called the Cauchy Principal Value of the integral, and it is equal to the value of the improper integral PROVIDED the integral converges.

We will use this concept in some of our calculations.
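To make the distinction concrete, here is a small numerical sketch of my own (not part of the original argument), assuming scipy is available: the symmetric limit for f(x) = x stays at zero even though either one-sided piece blows up, while for \frac{1}{1+x^2} the symmetric limit agrees with the true value \pi .

```python
# Minimal sketch contrasting the Cauchy Principal Value with genuine convergence.
from scipy.integrate import quad
import numpy as np

for b in [10, 100, 1000]:
    sym_x, _  = quad(lambda x: x, -b, b)             # symmetric limit for f(x) = x: stays ~0
    half_x, _ = quad(lambda x: x, 0, b)              # one-sided piece: grows like b^2/2
    sym_r, _  = quad(lambda x: 1/(1 + x**2), -b, b)  # convergent example: approaches pi
    print(f"b={b:5d}  symmetric x: {sym_x:+.1e}  one-sided x: {half_x:.1e}  "
          f"1/(1+x^2): {sym_r:.6f}  (pi = {np.pi:.6f})")
```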

One type of integral: let f(z) be a function with at most a finite number of isolated singularities, none of which lie on the real axis. Suppose that for R large enough, |f(Re^{it})| \leq \frac{M}{R^p} for some constant M and some p > 1 . Let H denote the upper half plane (Im(z) > 0 ). Then \int^{\infty}_{-\infty} f(x) dx = 2 \pi i \sum \{ \text{residues in } H\}

The idea: let C represent the contour shown above: the upper half of the circle |z| = R followed by the diameter, taken once around in the standard (counterclockwise) direction. Of course \int_C f(z) dz = 2 \pi i \sum \{ \text{residues enclosed by } C \} , and if R is large enough, the contour encloses all of the singularities in the upper half plane.

But the integral over C is also equal to the integral along the real axis followed by the integral along the upper half circle. As for the integral along the upper half circle (call it \Gamma ):

|\int_{\Gamma} f(z) dz | \leq \frac{M}{R^p} \pi R = \frac{\pi M}{R^{p-1}} \rightarrow 0 as R \rightarrow \infty because p-1 > 0 . The bound comes from "the maximum of the function being integrated times the arc length of the contour".

So the only non-zero part left is lim_{R \rightarrow \infty} \int^{R}_{-R} f(x) dx , which is the Cauchy Principal Value, which, IF the integral is convergent, is equal to 2 \pi i \sum \{ \text{residues in } H\} .

Application: \int^{\infty}_{-\infty} \frac{x^2}{x^4+1} dx . Note that if f(z) = \frac{z^2}{z^4+1} then |f(Re^{it})| \leq \frac{R^2}{R^4-1} , which meets our criteria. The upper half plane singularities are at z_0 = e^{i\frac{\pi}{4}}, z_1 = e^{i\frac{3\pi}{4}} ; both are simple poles. So we can plug these values into \frac{z^2}{4z^3} = \frac{1}{4}z^{-1} (the \frac{h}{g'} formula for the residue at a simple pole).

So to finish the integral up, the value is 2 \pi i \frac{1}{4}((e^{i\frac{\pi}{4}})^{-1} +(e^{i\frac{3\pi}{4}})^{-1} ) = \frac{1}{2} \pi i (-2i\frac{1}{\sqrt{2}}) = \frac{\pi}{\sqrt{2}}
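As a sanity check (my own, not in the original post), sympy can recompute the residues and scipy can integrate numerically; both should reproduce \frac{\pi}{\sqrt{2}} .

```python
# Sketch assuming sympy/scipy: sum the residues of z^2/(z^4+1) in the upper
# half plane and compare 2*pi*i times that sum with a direct numerical integral.
import sympy as sp
import numpy as np
from scipy.integrate import quad

z = sp.symbols('z')
f = z**2 / (z**4 + 1)
poles_H = [sp.exp(sp.I*sp.pi/4), sp.exp(3*sp.I*sp.pi/4)]      # poles with Im > 0
residue_sum = sum(sp.residue(f, z, p) for p in poles_H)
print(sp.simplify(2*sp.pi*sp.I*residue_sum))                  # expect sqrt(2)*pi/2 = pi/sqrt(2)

numeric, _ = quad(lambda x: x**2/(x**4 + 1), -np.inf, np.inf)
print(numeric, np.pi/np.sqrt(2))
```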

Now for a more delicate example
This example will feature many concepts, including the Cauchy Principal Value.

We would like to calculate \int^{\infty}_0 \frac{sin(x)}{x} dx with the understanding that lim_{x \rightarrow 0} \frac{sin(x)}{x} = 1 so we assume that \frac{sin(0)}{0} \text{``="} 1 .

First, we should show that \int^{\infty}_0 \frac{sin(x)}{x} dx converges. We can do this via the alternating series test:

\int^{\infty}_0 \frac{sin(x)}{x} dx = \sum^{\infty}_{k=0} \int_{\pi k}^{\pi (k+1)} \frac{sin(x)}{x} dx , which forms an alternating series of terms whose magnitudes decrease to 0: we are adding the signed areas bounded by the graph of \frac{sin(x)}{x} and the x axis, these signed areas alternate between positive and negative, and their absolute values decrease to zero since |\frac{sin(x)}{x}| \leq \frac{1}{x} for x > 0 .
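A quick numerical illustration (my own sketch, assuming scipy/numpy): the pieces a_k = \int_{\pi k}^{\pi (k+1)} \frac{sin(x)}{x} dx alternate in sign with shrinking magnitude, and their partial sums settle in near \frac{\pi}{2} .

```python
# Sketch: compute the terms of the alternating series; numpy's sinc(t) is
# sin(pi t)/(pi t), so sinc(x/pi) equals sin(x)/x and is safe at x = 0.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sinc(x / np.pi)
terms = [quad(f, k*np.pi, (k+1)*np.pi)[0] for k in range(8)]
print(np.round(terms, 5))                     # alternating signs, magnitudes decreasing
partial = sum(quad(f, k*np.pi, (k+1)*np.pi)[0] for k in range(300))
print(partial, np.pi/2)                       # partial sum is already close to pi/2
```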

Above: we have the graphs of Si(x) = \int^x_0 \frac{sin(t)}{t} dt \text{ and } \frac{sin(x)}{x} .

Aside: note that \int^{\infty}_0 |\frac{sin(x)}{x}| dx diverges.
Reason: \int^{(k+1) \pi}_{k \pi} |\frac{sin(x)}{x}| dx > \frac{1}{(k+1) \pi} \int^{(k+1) \pi}_{k \pi}|sin(x)| dx =\frac{2}{(k+1) \pi} and these form the terms of a divergent series.
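And a numerical look at this aside (again my own sketch): the integral of the absolute value over [0, N\pi] keeps growing, staying above the divergent lower-bound series \sum \frac{2}{(k+1)\pi} .

```python
# Sketch: the absolute-value integral grows without bound, consistent with
# the harmonic-type lower bound from the text.
import numpy as np
from scipy.integrate import quad

g = lambda x: abs(np.sinc(x / np.pi))         # |sin(x)/x|
for N in [10, 100, 1000]:
    integral = sum(quad(g, k*np.pi, (k+1)*np.pi)[0] for k in range(N))
    lower = sum(2/((k+1)*np.pi) for k in range(N))
    print(N, round(integral, 3), round(lower, 3))
```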

So we know that our integral converges. So what do we do? Our basic example assumed no poles on the real axis, and here we have a trig function to deal with.

So here is what turns out to work: use f(z) = \frac{e^{iz}}{z} . Why? For one, along the real axis ( y = 0 ) we have \frac{e^{iz}}{z} = \frac{e^{ix}}{x} =\frac{cos(x)}{x} + i \frac{sin(x)}{x} , and so what we want will be the imaginary part of our expression, once we figure out how to handle zero AND the real part. Now what about our contour?

Via (someone else’s) cleverness, we use:

We have a small half circle of radius r around the origin (in the upper half plane), the line segment from it out to the larger half circle of radius R , the larger half circle in the upper half plane, and the segment from the larger half circle back to the smaller one. If we call the complete contour \gamma then by Cauchy's theorem \int_{\gamma} \frac{e^{iz}}{z} dz = 0 no matter what r \text{ and } R are, provided 0 < r < R . So the idea will be to let r \rightarrow 0, R \rightarrow \infty and to see what happens.

Let L stand for the larger half circle and we hope that as R \rightarrow \infty we have \int_L \frac{e^{iz}}{z} dz \rightarrow 0

This estimate is a bit trickier than the other estimates.

Reason: if we try |\frac{e^{iRe^{it}}}{Re^{it}}| = |\frac{e^{iRcos(t)}e^{-Rsin(t)}}{Re^{it}}| = \frac{e^{-Rsin(t)}}{R} , which equals \frac{1}{R} when t = 0, t = \pi . Then multiplying this by the length of the curve (\pi R ) we simply get \pi , which does NOT go to zero.

So we turn to integration by parts: in \int_L \frac{e^{iz}}{z} dz use u = \frac{1}{z}, dv = e^{iz} dz, du = -\frac{1}{z^2} dz, v = -ie^{iz} \rightarrow \int_L \frac{e^{iz}}{z} dz= -ie^{iz}\frac{1}{z}|_L - i \int_L \frac{e^{iz}}{z^2} dz

Now as R \rightarrow \infty we have |-ie^{iz}\frac{1}{z}|_L | \leq \frac{2}{R} (since |e^{iz}| \leq 1 in the upper half plane and |z| = R at both endpoints of L ), which does go to zero.

And the integral: we now have a z^2 in the denominator instead of merely z hence

|\int_L \frac{e^{iz}}{z^2} dz | \leq \frac{1}{R^2} \pi R = \frac{\pi} {R} which also goes to zero.

So we have what we want on the larger semicircle.
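To see the estimate in action, here is a rough numerical sketch of my own (a simple midpoint sum over z = Re^{it} , nothing more): the modulus of the big-semicircle integral shrinks roughly like \frac{1}{R} , consistent with the integration-by-parts bound.

```python
# Sketch: evaluate the integral of e^{iz}/z over the large semicircle by a
# midpoint sum over z = R e^{it}, t in [0, pi]; the modulus decays like 1/R.
import numpy as np

def big_arc(R, n=200_000):
    t = (np.arange(n) + 0.5) * np.pi / n      # midpoints of [0, pi]
    z = R * np.exp(1j * t)
    dz = 1j * z * (np.pi / n)                 # dz = i R e^{it} dt
    return np.sum(np.exp(1j * z) / z * dz)

for R in [10, 100, 1000]:
    print(R, abs(big_arc(R)))
```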

The small semicircle: we will NOT get zero as we integrate along the small semicircle around the origin and let its radius shrink to zero. Let's denote this semicircle by l (and note that along \gamma it is traversed in the opposite of the standard direction), so we are really calculating -\int_l \frac{e^{iz}}{z} dz . This is not an elementary integral. But what we can do is expand \frac{e^{iz}}{z} into its Laurent series centered at zero and obtain:
\frac{e^{iz}}{z} = \frac{1}{z} (1 + (iz) + (iz)^2 \frac{1}{2!} + (iz)^3\frac{1}{3!} + ...) = \frac{1}{z} + i - z \frac{1}{2!} - i z^2 \frac{1}{3!} + ... = \frac{1}{z} + g(z) where g(z) represents the regular part, that is, the analytic part.

So now -\int_l \frac{e^{iz}}{z} dz = -\int_l \frac{1}{z} dz  -\int_l g(z) dz . Now the second integral goes to zero as r \rightarrow 0 because it is the integral of a (bounded) analytic function over a smooth curve whose length is shrinking to zero (or alternately: whose endpoints are getting closer together; we'd have G(-r) - G(r) where G is a primitive of g , and both endpoints tend to 0 as r \rightarrow 0 ).

The part of the integral that survives is -\int_l \frac{1}{z} dz = -\pi i for ANY r (check this: use z(t) = re^{it}, t \in [0, \pi] ).
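Carrying out that check: with dz = ire^{it} dt we get \int_l \frac{1}{z} dz = \int^{\pi}_0 \frac{1}{re^{it}} \, ire^{it} dt = \int^{\pi}_0 i \ dt = \pi i , independent of r .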

So adding up the integrals (I am suppressing the integrand for brevity)

\int_L + \int^{-r}_{-R} -\int_l + \int^{R}_r = 0 \rightarrow \int^{-r}_{-R} + \int^{R}_r = \int_l - \int_L \rightarrow \pi i as r \rightarrow 0, R \rightarrow \infty .

So we have lim_{R \rightarrow \infty, r \rightarrow 0 } \int^{-r}_{-R} (\frac{cos(x)}{x} + i \frac{sin(x)}{x}) dx +  \int^{R}_{r} (\frac{cos(x)}{x} + i \frac{sin(x)}{x}) dx =\pi i

Now for the cosine integral: yes, as r \rightarrow 0 this integral diverges. BUT note that for all finite R > r > 0 we have \int^{-r}_{-R} \frac{cos(x)}{x} dx  +  \int^{R}_{r} \frac{cos(x)}{x}  dx = 0 as \frac{cos(x)}{x} is an odd function. So the Cauchy Principal Value of \int^{\infty}_{-\infty} \frac{cos(x)}{x} dx is 0 .

We can now use the imaginary parts and note that the integrals converge (as previously noted) and so:

\int^{\infty}_{-\infty} i\frac{sin(x)}{x} dx = i \pi \rightarrow \int^{\infty}_0 \frac{sin(x)}{x}dx = \frac{\pi}{2} since \frac{sin(x)}{x} is an even function.
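As a numerical cross-check (my own, assuming scipy): scipy.special.sici returns the sine integral Si(x) = \int^x_0 \frac{sin(t)}{t} dt , which should level off at \frac{\pi}{2} .

```python
# Sketch: Si(x) -> pi/2 as x grows.
import numpy as np
from scipy.special import sici

for x in [10.0, 100.0, 10000.0]:
    Si, Ci = sici(x)
    print(x, Si, np.pi/2)
```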


Fresnel Integrals

In this case, we want to calculate \int^{\infty}_0 cos(x^2) dx and \int^{\infty}_0 sin(x^2) dx . Like the previous example, we will integrate a specific function along a specially chosen contour. But unlike the previous example, we will be integrating an entire (analytic) function, so the integral along the simple closed curve is automatically zero; the work will be in showing that the integral along one leg of the contour (the circular arc) goes to zero.

The function we integrate is f(z) = e^{iz^2} . Now along the real axis, y = 0 and so e^{i(x+iy)^2} =e^{ix^2} = cos(x^2) + isin(x^2) and dz = dx So the integral along the positive real axis will be the integrals we want with \int^{\infty}_0 cos(x^2) dx being the real part and \int^{\infty}_0 sin(x^2) dx being the imaginary part.

So here is the contour

Now look at the top edge of the wedge: z = te^{i \frac{\pi}{4}}, t \in [0,R] (taken in the “negative direction”, that is, from the arc back toward the origin)

So z^2 = t^2 e^{i \frac{\pi}{2}} =t^2 (cos(\frac{\pi}{2}) + isin(\frac{\pi}{2})) = it^2 \rightarrow e^{iz^2} = e^{i(it^2)} = e^{-t^2}

We still need dz = (cos(\frac{\pi}{4}) + isin(\frac{\pi}{4})) dt = \frac{\sqrt{2}}{2}(1+i)dt

So the integral along this edge becomes - \frac{\sqrt{2}}{2}(1+i)\int^{R}_0 e^{-t^2} dt \rightarrow -\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2}(1+i) as R \rightarrow \infty . Now IF we can show that, as R \rightarrow \infty , the integral along the circular arc goes to zero, we have:

\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2}(1+i) = \int^{\infty}_0 cos(x^2) dx + i  \int^{\infty}_0 sin(x^2) dx . Now equate real and imaginary parts to obtain:

\int^{\infty}_0 cos(x^2) dx = \frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2} = \frac{\sqrt{2 \pi}}{4} and \int^{\infty}_0 sin(x^2) dx =\frac{\sqrt{\pi}}{2}\frac{\sqrt{2}}{2} = \frac{\sqrt{2 \pi}}{4}
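A numerical cross-check (my own sketch): scipy.special.fresnel uses the convention S(x) = \int^x_0 sin(\frac{\pi}{2}t^2) dt, C(x) = \int^x_0 cos(\frac{\pi}{2}t^2) dt , and the substitution x = t\sqrt{\frac{\pi}{2}} turns our integrals into \sqrt{\frac{\pi}{2}} times those, so both should come out to \frac{\sqrt{2 \pi}}{4} .

```python
# Sketch: rescale scipy's Fresnel integrals (which tend to 1/2) and compare
# with sqrt(2*pi)/4.
import numpy as np
from scipy.special import fresnel

S, C = fresnel(1e4)                           # both approach 1/2 for large argument
scale = np.sqrt(np.pi/2)
print(scale*C, scale*S, np.sqrt(2*np.pi)/4)
```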

So let’s set out to do just that: here z = Re^{it}, t \in [0, \frac{\pi}{4}] so e^{iz^2} = e^{iR^2e^{2it}} = e^{iR^2(cos(2t) + isin(2t))} = e^{iR^2cos(2t)}e^{-R^2sin(2t)} . We now have dz = iRe^{it} dt so now

|\int^{\frac{\pi}{4}}_0 e^{iR^2cos(2t)}e^{-R^2sin(2t)}iRe^{it} dt| \leq \int^{\frac{\pi}{4}}_0|e^{iR^2cos(2t)} || e^{-R^2sin(2t)}| |iRe^{it}| dt = \int^{\frac{\pi}{4}}_0 e^{-R^2sin(2t)} R dt

Now note: for t \in [0, \frac{\pi}{4}] we have sin(2t) \geq \frac{2}{\pi}t (this follows from the standard estimate sin(u) \geq \frac{2}{\pi}u on [0, \frac{\pi}{2}] , which with u = 2t actually gives the stronger bound \frac{4}{\pi}t )

\rightarrow e^{-R^2 sin(2t)} \leq e^{-R^2\frac{2}{\pi}t} hence

\int^{\frac{\pi}{4}}_0 e^{-R^2sin(2t)} R dt \leq \int^{\frac{\pi}{4}}_0 e^{-R^2\frac{2}{\pi}t} R dt = \frac{1}{R}\frac{\pi}{2}(1-e^{-R^2\frac{1}{2}}) and this goes to zero as R goes to infinity.
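One more numerical sketch of my own (a midpoint sum for the wedge-arc integral, compared against the bound \frac{\pi}{2R}(1-e^{-\frac{R^2}{2}}) ): both shrink as R grows.

```python
# Sketch: |arc integral of e^{i z^2}| versus the bound pi/(2R)(1 - e^{-R^2/2}).
import numpy as np

def wedge_arc(R, n=400_000):
    t = (np.arange(n) + 0.5) * (np.pi/4) / n  # midpoints of [0, pi/4]
    z = R * np.exp(1j * t)
    dz = 1j * z * ((np.pi/4) / n)
    return abs(np.sum(np.exp(1j * z**2) * dz))

for R in [5, 20, 80]:
    print(R, wedge_arc(R), np.pi/(2*R) * (1 - np.exp(-R**2/2)))
```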

Green’s Theorem and complex line integrals…

We will have “the usual” assumptions about the functions in question. So assume that \gamma is a simple closed curve (say, piecewise smooth), taken in the standard counterclockwise direction, which bounds a simply connected region A and (P(x,y), Q(x,y)) is a smooth vector field on some open set containing A .

Then \int_{\gamma} Pdx + Qdy = \int \int_A Q_x - P_y dA

This isn’t the place to prove this in full, but I can sketch out a proof:

1. Prove this for a rectangle (we will do this here)
2. Prove this for a union of rectangles.
3. Show that a region bounded by a piecewise smooth curve can be approximated as closely as desired by a region consisting of a union of rectangles (think: pixels in a jpg image). So, in the limit, the integral over the given piecewise smooth curve is approximated by the integral over the polygonal boundary of the union of rectangles, and the area integral is approximated by the integral over the union of rectangles.

So, to prove Green’s theorem for a rectangle: let the rectangle A have vertices (a,u), (b,u), (b,v),(a, v) and note that
\int \int_A Q_x - P_y dA =\int^v_u \int^b_a Q_x dxdy - \int^b_a \int^v_u P_y dydx

= \int^v_u (Q(b,y)-Q(a,y)) dy -\int^b_a (P(x,v)-P(x,u)) dx =

\int^v_u Q(b,y)dy -\int^v_u Q(a,y)dy + \int^b_a P(x,u) dx - \int^b_a P(x,v) dx

Now let’s do the line integral starting at the lower left corner (a,u) . The boundary is parametrized by 4 curves: x = x, a \leq x \leq b, dx = dx, y = u, dy = 0 ; then x = b, dx = 0, y = y, u \leq y \leq v, dy = dy ; then x = x, dx = dx, y = v, dy = 0 with x running “backwards” from b to a ; and finally x = a, dx = 0, y = y, dy = dy with y running backwards from v to u . So the line integral becomes \int^b_a P(x,u) dx + \int^v_u Q(b,y) dy - \int^b_a P(x,v)dx - \int^v_u Q(a,y) dy which is what the area integral works out to.
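Here is a hedged numerical check of the rectangle case (my own example; the field P = x^2 y, Q = x y^3 and the rectangle [0,2] \times [0,1] are arbitrary choices): the four-sided line integral and the double integral of Q_x - P_y agree.

```python
# Sketch: verify Green's theorem on a rectangle for a sample smooth field.
import numpy as np
from scipy.integrate import quad, dblquad

a, b, u, v = 0.0, 2.0, 0.0, 1.0
P = lambda x, y: x**2 * y
Q = lambda x, y: x * y**3

# line integral over the four sides, counterclockwise starting at (a, u)
line = ( quad(lambda x: P(x, u), a, b)[0]     # bottom: y = u, dy = 0
       + quad(lambda y: Q(b, y), u, v)[0]     # right:  x = b, dx = 0
       - quad(lambda x: P(x, v), a, b)[0]     # top: traversed backwards
       - quad(lambda y: Q(a, y), u, v)[0] )   # left: traversed backwards

# double integral of Q_x - P_y = y^3 - x^2 over the rectangle
area = dblquad(lambda y, x: y**3 - x**2, a, b, lambda x: u, lambda x: v)[0]
print(line, area)                             # the two values agree
```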

The following figure suggests what goes on when we extend Green’s Theorem to the union of two rectangles.

Ok, what does that have to do with complex integrals? Remember that if \gamma (t) = \alpha (t) + i \beta (t) and \gamma '(t) = \alpha '(t) + i\beta '(t) then \int_{\gamma} f(z) dz  = \int_{\gamma} f(\gamma(t)) \gamma '(t) dt

Now if f(z) = f(x + iy) = u(x,y) + iv(x,y) then \int_{\gamma} f(z) dz = \int_{\gamma} (u(\alpha (t), \beta (t)) + i v(\alpha(t), \beta(t)))(\alpha ' (t) + i \beta ' (t)) dt

Now in the interest of space, I’ll suppress the \alpha (t), \beta (t) and call \alpha ' (t) dt = dx, \beta '(t)dt = dy and write:

\int_{\gamma} (u + i v)(dx + i dy) = \int_{\gamma} (u dx -v dy) + i(vdx + u dy)

Let’s do an example: let \gamma (t) = t + ti, t \in [0, 1] , which gives the line segment running from 0 to 1+ i , and let f(z) = z^2 = x^2-y^2 + 2xyi

Then \int_{\gamma} z^2 dz = \int^1_0 ((t^2-t^2) dt -2t^2 dt) + i(2t^2dt + (t^2-t^2) dt) = -\frac{2}{3}t^3 + i\frac{2}{3}t^3 |^1_0 = \frac{2}{3}(-1+i)

Note: \frac{1}{3}(1+i)^3 = \frac{1}{3}(1 +3i -3 -i) = \frac{2}{3}(-1+i)
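A quick numerical check of this example (my own sketch): integrate f(\gamma(t))\gamma'(t) over [0,1] and compare with \frac{2}{3}(-1+i) .

```python
# Sketch: integral of z^2 along gamma(t) = t + t*i, with gamma'(t) = 1 + i.
import numpy as np
from scipy.integrate import quad

gamma  = lambda t: t + 1j*t
dgamma = 1 + 1j
f      = lambda t: gamma(t)**2 * dgamma

re, _ = quad(lambda t: f(t).real, 0, 1)
im, _ = quad(lambda t: f(t).imag, 0, 1)
print(re + 1j*im, (1/3)*(1 + 1j)**3)          # both equal (2/3)(-1 + i)
```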

Now Green’s Theorem doesn’t apply unless the curve is a closed one that bounds a simply connected region in the plane. We assume that the curve goes in the standard direction. IF we meet those conditions, we can start with:

\int_{\gamma} (u dx -v dy) + i(vdx + u dy) with u taking the place of P and -v replacing Q in the real part; v replaces P and u replaces Q in the imaginary part. Then if A is the region bounded by \gamma we get:

\int \int_A (-v_x - u_y) + i(u_x - v_y) dA = i \int \int_A (u_x -v_y) +i(u_y+v_x) dA , which is more significant than it might appear. Your text goes on to calculate this as:

i \int \int_A ((u_x + i v_x) + i(u_y +iv_y)) dA which the author writes as i \int \int_A (f_x + i f_y) dA where f_x = u_x + i v_x, f_y = u_y + i v_y . (Note that when f is analytic, the Cauchy-Riemann equations u_x = v_y, u_y = -v_x make this integrand zero, which recovers Cauchy's theorem for such curves.)
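To see the formula do something, here is a small check of my own (the choice f(z) = \bar{z} on the unit square is not from the text): with f = x - iy we get f_x = 1, f_y = -i , so i \int \int_A (f_x + i f_y) dA = i \int \int_A 2 \ dA = 2i (the area is 1), and the boundary integral of \bar{z} should come out to the same 2i .

```python
# Sketch: for f(z) = conj(z) on the unit square, the boundary integral
# equals i * (double integral of f_x + i*f_y) = 2i * Area = 2i.
import numpy as np
from scipy.integrate import quad

f = np.conj

def leg(z0, z1):
    # integrate f along the straight segment from z0 to z1
    dz = z1 - z0
    g = lambda t: f(z0 + t*dz) * dz
    return quad(lambda t: g(t).real, 0, 1)[0] + 1j*quad(lambda t: g(t).imag, 0, 1)[0]

corners = [0, 1, 1 + 1j, 1j, 0]               # counterclockwise around the unit square
print(sum(leg(corners[k], corners[k+1]) for k in range(4)))   # expect 2j
```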