Onward to the argument principle and Rouche’s Theorem

Consider the function $f(z) = z^3$ and look at what it does to values on the circle $z(t) = e^{it}, t \in [0, 2\pi)$. We have $f(e^{it}) = e^{i3t}$, and as we go around the circle once, we see that $\arg(f(z)) = \arg(e^{i3t})$ ranges from $0$ to $6 \pi$. Note also that $f$ has a zero of order 3 at the origin and $3(2 \pi) = 6 \pi$. That is NOT a coincidence.

Now consider the function $g(z) = \frac{1}{z^3}$ and look at what it does to values on the circle $z(t) = e^{it}, t \in [0, 2\pi)$. We have $g(e^{it}) = e^{-i3t}$, and as we go around the circle once, we see that $\arg(g(z)) = \arg(e^{-i3t})$ ranges from $0$ to $-6 \pi$. And here, $g$ has a pole of order 3 at the origin. This, too, is not a coincidence.

We can formalize this somewhat: in the first case, suppose we let $\gamma$ be the unit circle taken once around in the standard direction and let’s calculate:

$\int_{\gamma} \frac{f'(z)}{f(z)} dz = \int_{\gamma}\frac{3z^2}{z^3}dz = 3 \int_{\gamma} \frac{1}{z}dz = 6 \pi i$

In the second case: $\int_{\gamma} \frac{g'(z)}{g(z)} dz = \int_{\gamma}\frac{-3z^{-4}}{z^{-3}}dz = -3 \int_{\gamma} \frac{1}{z}dz = -6 \pi i$
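Both integrals are easy to check numerically. Here is a quick sketch (Python with NumPy, purely illustrative) that approximates $\int_{\gamma} \frac{f'(z)}{f(z)}\,dz$ with a Riemann sum over the parameterization $z(t) = e^{it}$:

```python
import numpy as np

def log_derivative_integral(f, fprime, n=20000):
    """Riemann-sum approximation of the integral of f'/f over the
    unit circle z(t) = e^{it}, t in [0, 2*pi), traversed once."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / n)   # dz = i e^{it} dt
    return np.sum(fprime(z) / f(z) * dz)

I_f = log_derivative_integral(lambda z: z**3, lambda z: 3 * z**2)     # f(z) = z^3
I_g = log_derivative_integral(lambda z: z**-3, lambda z: -3 * z**-4)  # g(z) = 1/z^3

print(I_f / (np.pi * 1j))  # ≈ 6, i.e. the integral is 6*pi*i
print(I_g / (np.pi * 1j))  # ≈ -6, i.e. the integral is -6*pi*i
```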

Here is what is going on: you might have been tempted to think $\int_{\gamma} \frac{f'(z)}{f(z)} dz = Log(f(z))|_{\gamma} = (\ln|f(z)| + i Arg(f(z)))|_{\gamma}$, and this almost works; remember that $Arg(z)$ switches values abruptly along a ray from the origin that follows the negative real axis... and any version of the argument function must do so along SOME ray from the origin. The real part of the integral (the $\ln|f(z)|$ part) cancels out when one goes around the curve. The argument part (the imaginary part) does not; in fact we pick up a value of $2 \pi i$ every time we cross that given ray from the origin, and in the case of $f(z) = z^3$ we cross that ray 3 times.

This is the argument principle in action.

Now of course, this principle works in the vicinity of any isolated singularity or zero, or along a curve that avoids singularities and zeros but encloses a finite number of them. The mathematical machinery we develop will make this precise.

So, let’s suppose that $f$ has a zero of order $m$ at $z = z_0$. This means that $f(z) = (z-z_0)^m g(z)$ where $g(z_0) \neq 0$ and $g$ is analytic on some open disk about $z_0$.

Now calculate: $\frac{f'(z)}{f(z)} = \frac{m(z-z_0)^{m-1} g(z) + (z-z_0)^m g'(z)}{(z-z_0)^m g(z)} = \frac{m}{z-z_0} + \frac{g'(z)}{g(z)}$. Now note that the second term of the sum is an analytic function; hence the Laurent series for $\frac{f'(z)}{f(z)}$ has $\frac{m}{z-z_0}$ as its principal part; hence $Res(\frac{f'(z)}{f(z)}, z_0) = m$.

Now suppose that $f$ has a pole of order $m$ at $z_0$. Then $f(z) = \frac{1}{h(z)}$ where $h$ has a zero of order $m$ at $z_0$. So as before, write $f(z) = \frac{1}{(z-z_0)^m g(z)} = (z-z_0)^{-m}(g(z))^{-1}$ where $g$ is analytic and $g(z_0) \neq 0$. Now $f'(z) = -m(z-z_0)^{-m-1}(g(z))^{-1} - (z-z_0)^{-m}(g(z))^{-2}g'(z)$ and
$\frac{f'(z)}{f(z)} = \frac{-m}{z-z_0} - \frac{g'(z)}{g(z)}$ where the second term is an analytic function. So $Res(\frac{f'(z)}{f(z)}, z_0) = -m$.

This leads to the following result: let $f$ be analytic on some open set containing a piecewise smooth simple closed curve $\gamma$ and analytic on the region bounded by the curve as well, except for a finite number of poles. Also suppose that $f$ has no zeros on the curve.

Then $\int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (\sum^k_{j=1} m_j - \sum^l_{j=1} n_j)$ where $m_1, m_2, ..., m_k$ are the orders of the $k$ zeros of $f$ inside $\gamma$ and $n_1, n_2, ..., n_l$ are the orders of the poles of $f$ inside $\gamma$.

This follows directly from the theory of cuts: use cut lines to reduce the integral to a sum of integrals over small circles about each zero and pole, then apply the residue computations above.

Use of our result: let $f(z) = \frac{(z-i)^4(z+2i)^3}{z^2 (z+3i-4)}$ and let $\Gamma$ be a circle of radius 10 (large enough to enclose all poles and zeros of $f$). Then $\int_{\Gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4 + 3 - 2 - 1) = 8 \pi i$. Now if $\gamma$ is the circle $|z| = 3$, we see that $\gamma$ encloses the zeros at $i, -2i$ and the pole at $0$, but not the pole at $4-3i$, so $\int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4 + 3 - 2) = 10 \pi i$.
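As a numerical sanity check (a sketch only; it uses the partial-fraction form of $\frac{f'}{f}$ rather than differentiating the quotient directly):

```python
import numpy as np

def contour_integral(h, radius, n=20000):
    """Riemann-sum approximation of the integral of h around the
    circle |z| = radius, traversed once counterclockwise."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = radius * np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / n)
    return np.sum(h(z) * dz)

# For f(z) = (z-i)^4 (z+2i)^3 / (z^2 (z - (4-3i))), the logarithmic
# derivative f'/f is a sum of order/(z - point) terms: + for zeros, - for poles.
logderiv = lambda z: 4/(z - 1j) + 3/(z + 2j) - 2/z - 1/(z - (4 - 3j))

big = contour_integral(logderiv, 10)   # encloses all zeros and poles
small = contour_integral(logderiv, 3)  # excludes the pole at 4 - 3i

print(big / (np.pi * 1j), small / (np.pi * 1j))  # ≈ 8.0 and ≈ 10.0
```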

Now this is NOT the main use of this result; the main use is to describe the argument principle and to get to Rouche’s Theorem which, in turn, can be used to deduce facts about the zeros of an analytic function.

Argument principle: our discussion about integrating $\frac{f'(z)}{f(z)}$ around a closed curve (assuming that the curve runs through no zeros or poles of $f$ and encloses a finite number of zeros and poles) shows that, as we traverse the curve, the argument of the function changes by $2 \pi (\text{no. of zeros} - \text{no. of poles})$, where the zeros and poles are counted with multiplicities.

Example: consider the function $f(z) = z^8 + z^2 + 1$. Let’s find how many zeros it has in the first quadrant.

If we consider the quarter circle of very large radius $R$ (one that stays in the first quadrant and is large enough to enclose all first quadrant zeros) and note that $f(Re^{it}) = R^8e^{i8t}(1+ \frac{1}{R^6}e^{-i6t} + \frac{1}{R^8}e^{-i8t})$, we see that the argument changes by about $8(\frac{\pi}{2}) = 4 \pi$ along the arc. The function has no roots along the positive real axis. Now setting $z = iy$ to run along the positive imaginary axis we get $f(iy) = y^8 - y^2 + 1$, which is positive for large $y$, has one relative minimum at $y = 2^{\frac{-1}{3}}$ (which yields a positive value), and equals $1$ at $y = 0$. So the argument change stays at $4 \pi$, and we get $4 \pi = 2 \pi (\text{no. of roots in the first quadrant})$, which means that we have 2 roots in the first quadrant.
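We can corroborate this count numerically (an illustrative check, not a proof), using NumPy's companion-matrix root finder:

```python
import numpy as np

# Coefficients of z^8 + z^2 + 1, highest power first
roots = np.roots([1, 0, 0, 0, 0, 0, 1, 0, 1])

in_first_quadrant = [r for r in roots if r.real > 0 and r.imag > 0]
print(len(in_first_quadrant))  # 2
```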


Now for Rouche’s Theorem
Here is Rouche’s Theorem: let $f, g$ be analytic on some piecewise smooth simple closed curve $C$ and on the region that $C$ encloses. Suppose that, on $C$, we have $|f(z) - g(z)| < |f(z)|$. Then $f, g$ have the same number of zeros (counted with multiplicity) inside $C$. Note: the inequality precludes $f$ (and $g$) from having a zero on $C$, and we can assume that $f, g$ have no common zeros inside $C$, for if they do, we can “cancel them out” by, say, writing $f(z) = (z-z_0)^m f_1(z), g(z) = (z-z_0)^m g_1(z)$ at the common zeros. So now, on $C$, we have $|1 - \frac{g(z)}{f(z)}| < 1$, which means that the values of the function $\frac{g(z)}{f(z)}$ lie within the disk $|w - 1| < 1$ in the image space. This means that the argument of $\frac{g(z)}{f(z)}$ always lies strictly between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. This means that the argument cannot change by $2 \pi$ as we traverse $C$, so, up to multiplicity, the number of zeros and poles of $\frac{g(z)}{f(z)}$ inside $C$ must be equal. But the poles come from the zeros of the denominator $f$ and the zeros come from the zeros of the numerator $g$.

And note: once again, what happens on the boundary of a region (the region bounded by the closed curve) determines what happens INSIDE the curve.

Now let’s see what we can do with this. Consider our $g(z) = z^8 + z^2 + 1$. Now $|z^8 - (z^8 + z^2 + 1)| = |z^2+1| < |z^8|$ on the circle $|z| = \frac{3}{2}$ (and on somewhat smaller circles as well). This means that $z^8$ and $z^8 + z^2 + 1$ have the same number of roots inside the circle $|z| = \frac{3}{2}$: eight roots (counting multiplicity). Now note that $|z^8 + z^2 + 1 - 1| = |z^8 + z^2| < 1$ for $|z| \leq \frac{2}{3}$. So $z^8 + z^2 + 1$ and $1$ have the same number of zeros inside the circle $|z| = \frac{2}{3}$: none at all. This means that all of the roots of $z^8 + z^2 + 1$ lie in the annulus $\frac{2}{3} < |z| < \frac{3}{2}$.
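Again, a quick numerical check of the conclusion (illustrative only):

```python
import numpy as np

roots = np.roots([1, 0, 0, 0, 0, 0, 1, 0, 1])  # z^8 + z^2 + 1
moduli = np.abs(roots)
print(moduli.min(), moduli.max())

# Every root should lie strictly inside the annulus 2/3 < |z| < 3/2
assert np.all((moduli > 2/3) & (moduli < 3/2))
```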

A summary of some integral theorems

This post will contain no proofs but rather, statements of theorems that we will use. Note: all curves, unless stated otherwise, will be piecewise smooth and taken in the standard direction.

1. Let $f$ be analytic on some open domain $D$ and let $\gamma$ be a simple closed curve in $D$ whose bounded region is also in $D$. Then $\int_{\gamma}f(w)dw = 0$.

Note: the curve in question is a simple closed curve whose bounded region is in a domain where $f$ is analytic.

So, we note that $f(w) = \frac{1}{w}$ is analytic in the open annulus $A = \{z \,|\, 1 < |z| < 3 \}$ and the curve $|z| = 2$ lies in $A$, but $\int_{|z|=2} \frac{1}{w} dw = 2 \pi i \neq 0$. The reason this does not violate this result is that the region bounded by the curve, $\{z \,|\, |z| < 2 \}$, is NOT contained in $A$.

2. If $f$ is analytic within a simply connected open domain $D$ and $\gamma$ is a closed curve in $D$ (not necessarily a simple closed curve; $\gamma$ might have self intersections), then $\int_{\gamma} f(w)dw = 0$. Note that this result follows from a careful application of 1. This also shows that if $\gamma, \alpha$ are two paths connecting, say, $w_0$ to $z_0$ in $D$, then $\int_{\gamma}f(w)dw = \int_{\alpha} f(w) dw$. That is, the integrals are path independent.

Why this is useful: suppose $f$ is NOT analytic at, say, $w_0$ but is analytic everywhere else in some open domain $D$ which contains $w_0$. Now let $\gamma, \alpha$ be two disjoint simple closed curves whose bounded regions contain $w_0$. Then $\int_{\gamma} f(w)dw = \int_{\alpha} f(w) dw$ even though the integrals might not be zero.

The curve formed by connecting $\gamma$ to $\alpha$ by a cut line (and going backwards on $\alpha$) is NOT a simple closed curve, but it is a piecewise smooth closed curve which bounds a simply connected region that excludes the point where $f$ is not analytic; hence the integral along this curve IS zero. Subtracting off the integrals along the cut lines (they are traversed in opposite directions, so they cancel) yields the equality of the two integrals.

3. If $f$ is analytic on a simply connected domain, then $f$ has a primitive there (aka an “antiderivative”). That is, there is some $F$ where $F' = f$ on that domain.

Note: you need “simply connected” here as $\frac{1}{z}$ is analytic on $C - \{0\}$ but has no primitive ON $C - \{0\}$.
But $\frac{1}{z}$ does have a primitive on, say, $\{z| Im(z) > 0 \}$ ($Log(z)$ is one such primitive)

4. If $f$ has a primitive on an open domain and $\gamma$ is a closed curve in that domain, then $\int_{\gamma} f(w)dw = 0$.
This follows from our “evaluation of an integral by a primitive” theorem. And note: the domain does NOT have to be simply connected.

Example: $\frac{1}{z^2}$ has a primitive on $C - \{0\}$ (namely $-\frac{1}{z}$), so if $\gamma$ is any closed curve that does not run through the origin, $\int_{\gamma} \frac{1}{z^2} dz = 0$. But this does NOT work for $\frac{1}{z}$, as any candidate for a primitive is a branch of the log function, which must have discontinuities along some infinite curve (possibly not straight) whose endpoint is at the origin.
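A numerical illustration of the contrast (sketch only): integrate both functions once around the circle $|z| = 2$.

```python
import numpy as np

n = 20000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = 2 * np.exp(1j * t)           # the circle |z| = 2
dz = 1j * z * (2.0 * np.pi / n)

I1 = np.sum(z**-2 * dz)  # 1/z^2 has the primitive -1/z on C - {0}: expect 0
I2 = np.sum(z**-1 * dz)  # 1/z has no primitive on C - {0}: expect 2*pi*i

print(abs(I1))            # ≈ 0
print(I2 / (np.pi * 1j))  # ≈ 2
```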

Stuff you have to KNOW …

Learning mathematics can be puzzling at times. Yes, insight and creativity are often called for. But there are some tools of the subject which must be mastered for creativity to be productive.

So, here is my opinion on some of what MUST be mastered for success in this course.

1. Series. Series are much more important in complex variable calculus than they are in real variable calculus, and there are some subtleties. But with regards to series:

a. Absolute convergence: a series of absolute values can be treated like a series of non-negative real numbers because that is what it is! So know the root test, ratio test, geometric series test, and the various comparison tests (limit comparison, basic comparison).

b. Now with regards to absolute values: remember $|z| +|w| \geq |z+w|$ and $|z-w| \geq ||z| - |w||$. That will be useful for comparison tests and the like. Also $|w| \geq |Re(w)|, |w| \geq |Im(w)|$

c. I cannot stress this enough: you must KNOW the geometric series formula; we will use this over and over again. That is, if $|w| < 1, \sum^{\infty}_{k=0} w^k = \frac{1}{1-w}$. You must KNOW this.

d. In a couple of lessons, we will talk about power series. These will be super important as well.
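The geometric series formula from (c) is easy to spot-check numerically; here is a throwaway sketch:

```python
# Spot check of sum_{k=0}^infinity w^k = 1/(1-w) for |w| < 1
w = 0.3 + 0.4j                       # |w| = 0.5 < 1
partial = sum(w**k for k in range(200))
print(partial, 1 / (1 - w))          # the two agree to machine precision
```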

2. Integrals: if you integrate over a line segment from $a + bi$ to $r + si$, a parameterization that works is $x(t) = a + (r-a)t, y(t) = b + (s-b)t, dx = (r-a)dt, dy = (s-b)dt, t \in [0,1]$, and the integral becomes $\int^1_0 (u(x(t), y(t)) + iv(x(t), y(t)))(dx + i dy)$. You have to do this for each line segment unless there is some trick that enables you to use Green’s Theorem, which is something you should know.

If you are integrating around a circle of radius $R$ about the point $z_0$, the path is given by $z(t) = Re^{it} + z_0, \, dz = z'(t)dt = iRe^{it}dt, \, t \in [0, 2\pi]$.
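These two parameterizations translate directly into code; here is a minimal sketch (the helper names are my own):

```python
import numpy as np

def integrate_segment(f, z0, z1, n=10000):
    """Integrate f along the segment from z0 to z1 via
    z(t) = z0 + (z1 - z0)t, dz = (z1 - z0)dt, t in [0, 1]."""
    t = (np.arange(n) + 0.5) / n       # midpoint rule
    return np.sum(f(z0 + (z1 - z0) * t)) * (z1 - z0) / n

def integrate_circle(f, z0, R, n=10000):
    """Integrate f around z(t) = R e^{it} + z0, t in [0, 2*pi]."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = z0 + R * np.exp(1j * t)
    dz = 1j * R * np.exp(1j * t) * (2.0 * np.pi / n)
    return np.sum(f(z) * dz)

# z^2 from 0 to 1+i: the primitive z^3/3 gives (1+i)^3/3 = (-2+2i)/3
print(integrate_segment(lambda z: z**2, 0, 1 + 1j))
# 1/(z - z0) once around a circle about z0 gives 2*pi*i
print(integrate_circle(lambda z: 1 / (z - 2j), 2j, 0.5))
```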

3. Derivatives: you need to know the Cauchy-Riemann equations, $u_x = v_y, u_y = -v_x$, and that if a function is differentiable at a point, the C-R equations are satisfied there. If the C-R equations are satisfied on some open set and the first partials are continuous on that open set, then $f$ is analytic on that open set. It is possible for a function to be differentiable at one point only.

4. Of course, one must know the basic definitions of the complex log, exponential, sine and cosine functions (and it is useful to know them in $u(x,y) + iv(x,y)$ form as well).

There will be more to come.

The complex derivative and the Cauchy-Riemann equations

The definition of derivative for a complex function looks very familiar: $f'(z) = \lim_{\Delta z \rightarrow 0} \frac{f(z+ \Delta z) - f(z)}{\Delta z}$ provided the limit exists. It is a straightforward consequence of the binomial theorem that polynomials are differentiable everywhere, and the calculus theorems about the quotient, product and chain rules follow with no difficulties as complex variable algebra is extremely similar to real variable algebra.

As far as other functions: let us take a look:

For $f(z) = |z|$ we look at the difference quotient $\frac{|z+ \Delta z| - |z|}{\Delta z}$ and we get a bit of a mess unless $z = 0$; but even there, the function fails to be differentiable (think of the calculus absolute value function).

Mind you, this is a relatively simple function. But we can find help by doing the following:

Regard $f(z) = f(x+iy) = u(x,y) + iv(x,y)$. Now note the following:

1. IF $f$ is differentiable at $z$, we look at the difference quotient (let $\Delta z = \Delta x + i \Delta y$):

$\frac{u(x + \Delta x, y + \Delta y) + i v(x + \Delta x, y + \Delta y) -(u(x,y) + i v(x,y))}{\Delta x + i \Delta y}$

Key observation: if the limit as $\Delta x + i\Delta y \rightarrow 0$ exists, then that limit must be the same no matter how zero is approached.

First, let $\Delta x + i \Delta y$ approach zero by setting $\Delta y = 0$ and we get

$\frac{u(x + \Delta x, y) + iv(x + \Delta x, y) - (u(x,y) + iv(x,y))}{\Delta x}$, which is the difference quotient for $u_x + iv_x$.

Now let’s try again, but this time set $\Delta x = 0$ and look at the difference quotient:

$\frac{u(x , y + \Delta y) + i v(x , y + \Delta y) -(u(x,y) + i v(x,y))}{ i \Delta y}$

which is the difference quotient for $-iu_y + v_y$. Now equate the real and imaginary parts of these expressions to obtain:

$v_y = u_x, -u_y = v_x$ which are the Cauchy-Riemann equations.

A nice aside: we now know that $f'(z) = u_x + i v_x = v_y -iu_y = u_x - iu_y = v_y + i v_x$

So IF $f$ is differentiable then the C-R equations are satisfied.

Going the other way is trickier; we have to use results from the Taylor series in two variables to argue that: IF the partials of $u, v$ are continuous on some open set and the C-R equations are satisfied on that open set, then $f$ is differentiable on that open set (and is called “analytic” there).

Now if the partials merely exist on that open set, the C-R equations are satisfied at a point, and the partials are continuous at that point, then the function is differentiable at that point.

Now let’s tackle some examples:

1. $f(z) = \overline{z}$. Here $u = x, v = -y, u_x = 1, v_y = -1$, and so $f$ is not differentiable anywhere.

2. $f(z) = |z|^2$. Here $u = x^2 + y^2, v = 0, u_x = 2x, u_y = 2y$, and so the C-R equations are satisfied only at the origin.

3. $f(z) = |z|$. Here $u = \sqrt{x^2+y^2}, v = 0$, so $u_x = \frac{x}{\sqrt{x^2 + y^2}}, u_y = \frac{y}{\sqrt{x^2 + y^2}}$ away from the origin; these are never zero at the same time (and the partials fail to exist at the origin), so this function is not differentiable anywhere.

4. $f(z) = e^z = e^x \cos(y) + ie^x \sin(y)$; the C-R equations are satisfied everywhere, and using $f'(z) = u_x + iv_x$ we see that the derivative is just $e^z$, as expected.

5. So we can differentiate $\cos(z), \sin(z)$ by using the definitions (e.g. $\cos(z) = \frac{e^{iz} + e^{-iz}}{2}, \sin(z) = \frac{e^{iz} - e^{-iz}}{2i}$) or the definitions involving the real and imaginary parts.
We get what we expect.

6. To find the derivative of $Log(z)$, we can use the chain rule OR the real and imaginary parts, where $Log(x+iy) = \frac{1}{2} \ln(x^2 + y^2) + i\arctan(\frac{y}{x})$ (valid for $x > 0$), which is an interesting exercise.
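The examples above can be spot-checked with finite differences. This sketch (the helper is my own, purely illustrative) estimates the partials of $u, v$ numerically and tests the C-R equations along with $f'(z) = u_x + iv_x$:

```python
import numpy as np

def cr_residuals(f, z, h=1e-6):
    """Central-difference estimates of u_x - v_y and u_y + v_x
    (both should be ~0 if the C-R equations hold at z),
    plus the derivative estimate u_x + i v_x."""
    df_dx = (f(z + h) - f(z - h)) / (2 * h)            # u_x + i v_x
    df_dy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # u_y + i v_y
    u_x, v_x = df_dx.real, df_dx.imag
    u_y, v_y = df_dy.real, df_dy.imag
    return u_x - v_y, u_y + v_x, u_x + 1j * v_x

z = 0.7 + 0.3j
r1, r2, deriv = cr_residuals(np.exp, z)
print(r1, r2)               # both ≈ 0: C-R holds for e^z
print(deriv - np.exp(z))    # ≈ 0: the derivative of e^z is e^z

# By contrast, f(z) = conjugate(z) fails C-R everywhere:
r1, r2, _ = cr_residuals(np.conj, z)
print(r1, r2)               # ≈ 2 and ≈ 0: the first equation fails
```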

The complex log function

Now that we have a complex exponential function, it is time for a complex log function. Note that the fundamental domains for the complex exponential function are horizontal strips which span $2 \pi$ in the imaginary ($y$-axis) direction. The exponential, applied to such a strip, covers all of the complex numbers except for the origin.

So, in choosing the log function, we need to choose what we want its range to be; that is called “selecting the branch of the log function”.

Now let’s recall what the exponential function does: it takes horizontal lines to rays from the origin and vertical lines to circles about the origin.

So, the log ought to take rays through the origin to horizontal lines, and circles about the origin to vertical lines.

What works: $log(z) = \ln|z| + i\arg(z)$, and, if desired, we can select the principal branch $Log(z) = \ln|z| + iArg(z)$. Note: the log is not defined at the origin.

Now does this do what we want it to do?

Note $log(e^z) = \ln|e^z| + i\arg(e^z) = \ln|e^x e^{iy}| + i\arg(e^x e^{iy}) = \ln(e^x) + i\arg(e^{iy})$, which is almost $z$; this only works “mod $2 \pi i$” because of the ambiguity of the argument. If this sounds opaque: note that $e^0 = e^{2 \pi i}$, and applying the log function to both sides of the equation will yield the same result. So this direction works “set wise”.

So that part works well. To see the other direction: $e^{log(z)} = e^{ln|z|}e^{iarg(z)}= |z|e^{i arg(z)} = z$ (think in terms of polar coordinates). Note: this direction works “on the nose”.
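Python’s `cmath` module uses the principal branch, so we can watch both directions in action (a quick illustration):

```python
import cmath

z = -1.5 + 2.0j
print(cmath.exp(cmath.log(z)) - z)   # ≈ 0: e^{log z} = z "on the nose"

w = 0.5 + 7.0j                       # Im(w) lies outside (-pi, pi]
back = cmath.log(cmath.exp(w))
print((w - back) / (2j * cmath.pi))  # ≈ 1: log(e^w) = w only mod 2*pi*i
```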

With this in mind, try: $log(zw) = log(z) + log(w)$ (hint: write $z, w$ in polar coordinates). Note that this works “as sets”; it does not work for all non-zero $z, w$ if we use $Log(z)$.

Now we can define $z^w = e^{w log(z)}$. Note: this definition is set valued.

Let’s see why with an example: $2^i = e^{i log(2)} = e^{i(\ln(2) + 2k \pi i)} = e^{i\ln(2) - 2k \pi} = e^{-2k \pi} e^{i\ln(2)}$ where $k \in \{..., -2, -1, 0, 1, 2, ...\}$. So we have an infinite number of values.

We can define a principal value by making an appropriate choice of the log function.

Note: this fits right in with our roots of unity discussion. Example: $1^{\frac{1}{3}} = e^{\frac{1}{3} log(1)} = e^{\frac{1}{3}(\ln|1| + 2k \pi i)} = e^{\frac{2k \pi i}{3}}$ for $k \in \{0, 1, 2\}$, just as before.
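Both examples are easy to tabulate (an illustrative sketch; only a finite sample of the $k$ values, of course):

```python
import cmath, math

# A sample of the values of 2^i = e^{i(ln 2 + 2*k*pi*i)} = e^{-2*k*pi} e^{i ln 2}
values = [cmath.exp(1j * (math.log(2) + 2j * math.pi * k)) for k in range(-2, 3)]
for v in values:
    print(abs(v), cmath.phase(v))  # moduli e^{-2*k*pi}, argument always ln 2

# The same recipe recovers the three cube roots of unity:
cube_roots = [cmath.exp((2j * math.pi * k) / 3) for k in range(3)]
print(all(abs(r**3 - 1) < 1e-12 for r in cube_roots))  # True
```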