# Onward to the argument principle and Rouche’s Theorem

Consider the function $f(z) = z^3$ and look at what it does to values on the circle $z(t) = e^{it}, t \in [0, 2\pi)$. $f(e^{it}) = e^{i3t}$ and as we go around the circle once, we see that $arg(f(z)) = arg(e^{i3t})$ ranges from $0$ to $6 \pi$. Note also that $f$ has a zero of order 3 at the origin and $3(2 \pi) = 6 \pi$. That is NOT a coincidence.

Now consider the function $g(z) = \frac{1}{z^3}$ and look at what it does to values on the circle $z(t) = e^{it}, t \in [0, 2\pi)$. $g(e^{it}) = e^{-i3t}$ and as we go around the circle once, we see that $arg(g(z)) = arg(e^{-i3t})$ ranges from $0$ to $-6 \pi$. And here, $g$ has a pole of order 3 at the origin. This, too, is not a coincidence.

We can formalize this somewhat: in the first case, suppose we let $\gamma$ be the unit circle taken once around in the standard direction and let’s calculate:

$\int_{\gamma} \frac{f'(z)}{f(z)} dz = \int_{\gamma}\frac{3z^2}{z^3}dz = 3 \int_{\gamma} \frac{1}{z}dz = 6 \pi i$

In the second case: $\int_{\gamma} \frac{g'(z)}{g(z)} dz = \int_{\gamma}\frac{-3z^{-4}}{z^{-3}}dz = -3 \int_{\gamma} \frac{1}{z}dz = -6 \pi i$

Here is what is going on: you might have been tempted to think $\int_{\gamma} \frac{f'(z)}{f(z)} dz = Log(f(z))|_{\gamma} = (ln|f(z)| +iArg(f(z)) )|_{\gamma}$ and this almost works; remember that $Arg(z)$ switches values abruptly along a ray from the origin that follows the negative real axis, and any version of the argument function must do so along SOME ray from the origin. The real part of the integral (the $ln|f(z)|$ part) cancels out when one goes around the closed curve. The argument part (the imaginary part) does not; in fact we pick up a value of $2 \pi i$ every time we cross that given ray from the origin, and in the case of $f(z) = z^3$ we cross that ray 3 times.

This is the argument principle in action.
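Both integrals above are easy to check numerically. Here is a quick sketch (the discretization and function names are mine, not from the text), approximating the contour integral by a Riemann sum over the parametrization $z(t) = e^{it}$:

```python
import numpy as np

def contour_integral(h, n=4096):
    """Approximate the integral of h around the unit circle |z| = 1,
    traversed once counterclockwise, by a Riemann sum."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / n)  # z'(t) dt
    return np.sum(h(z) * dz)

# f(z) = z^3: integrate f'/f = 3z^2 / z^3; expect 6*pi*i
I_f = contour_integral(lambda z: 3 * z**2 / z**3)

# g(z) = 1/z^3: integrate g'/g = -3z^{-4} / z^{-3}; expect -6*pi*i
I_g = contour_integral(lambda z: -3 * z**-4 / z**-3)
```

Both integrands are actually constant in $t$ (they reduce to $\pm 3i$), so the Riemann sum is essentially exact here.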

Now of course, this principle can work in the vicinity of any isolated singularity or zero or along a curve that avoids singularities and zeros but encloses a finite number of them. The mathematical machinery we develop will help us with this concept.

So, let’s suppose that $f$ has a zero of order $m$ at $z = z_0$. This means that $f(z) = (z-z_0)^m g(z)$ where $g(z_0) \neq 0$ and $g$ is analytic on some open disk about $z_0$.

Now calculate: $\frac{f'(z)}{f(z)} = \frac{m(z-z_0)^{m-1} g(z) + (z-z_0)^m g'(z)}{(z-z_0)^m g(z)} = \frac{m}{z-z_0} + \frac{g'(z)}{g(z)}$. Now note that the second term of the sum is an analytic function; hence the Laurent series for $\frac{f'(z)}{f(z)}$ has $\frac{m}{z-z_0}$ as its principal part; hence $Res(\frac{f'(z)}{f(z)}, z_0) = m$.

Now suppose that $f$ has a pole of order $m$ at $z_0$. Then $f(z) =\frac{1}{h(z)}$ where $h(z)$ has a zero of order $m$ at $z_0$. So as before write $f(z) = \frac{1}{(z-z_0)^m g(z)} = (z-z_0)^{-m}(g(z))^{-1}$ where $g$ is analytic and $g(z_0) \neq 0$. Now $f'(z) = -m(z-z_0)^{-m-1}(g(z))^{-1} -(g(z))^{-2}g'(z)(z-z_0)^{-m}$ and
$\frac{f'(z)}{f(z)} =\frac{-m}{z-z_0} - \frac{g'(z)}{g(z)}$ where the second term is an analytic function. So $Res(\frac{f'(z)}{f(z)}, z_0) = -m$.
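As a sanity check of these two residue computations, here is a numerical sketch on examples of my own choosing: $f(z) = (z-2)^5(z+1)$, with a zero of order 5 at $z = 2$, and $h(z) = (z-2)^{-3}(z+1)$, with a pole of order 3 there; the logarithmic derivatives are written out by hand using the product rule.

```python
import numpy as np

def winding(logderiv, center, radius, n=4096):
    """(1/2*pi*i) * integral of f'/f around a circle; returns the
    residue of f'/f at the enclosed singularity."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * (z - center) * (2.0 * np.pi / n)
    return np.sum(logderiv(z) * dz) / (2j * np.pi)

# f(z) = (z-2)^5 (z+1): f'/f = 5/(z-2) + 1/(z+1)
m = winding(lambda z: 5.0 / (z - 2) + 1.0 / (z + 1), center=2.0, radius=0.5)
# expect m = 5, the order of the zero

# h(z) = (z-2)^{-3} (z+1): h'/h = -3/(z-2) + 1/(z+1)
p = winding(lambda z: -3.0 / (z - 2) + 1.0 / (z + 1), center=2.0, radius=0.5)
# expect p = -3, minus the order of the pole
```

The $\frac{1}{z+1}$ term is analytic inside the circle about $2$, so it contributes nothing, exactly as the argument above predicts.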

This leads to the following result: let $f$ be analytic on some open set containing a piecewise smooth simple closed curve $\gamma$ and analytic on the region bounded by the curve as well, except for a finite number of poles. Also suppose that $f$ has no zeros or poles on the curve itself.

Then $\int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (\sum^k_{j =1} m_j - \sum^l_{j = 1}n_j )$ where $m_1, m_2...m_k$ are the orders of the $k$ zeros of $f$ inside of $\gamma$ and $n_1, n_2.., n_l$ are the orders of the poles of $f$ inside $\gamma$.

This follows directly from the theory of cuts: deform $\gamma$, via cuts, into a union of small circles, one about each zero and each pole, and apply the residue calculations above.

Use of our result: let $f(z) = \frac{(z-i)^4(z+2i)^3}{z^2 (z+3i-4)}$ and let $\Gamma$ be a circle of radius 10 (large enough to enclose all poles and zeros of $f$). Then $\int_{\Gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4 + 3 -2-1) = 8 \pi i$. Now if $\gamma$ is the circle $|z| = 3$, we see that $\gamma$ encloses the zeros at $i, -2i,$ and the pole at 0 but not the pole at $4-3i$, so $\int_{\gamma} \frac{f'(z)}{f(z)} dz = 2 \pi i (4+3 -2) = 10 \pi i$.
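This example can be verified numerically. Since the logarithmic derivative of a product (or quotient) of powers is the sum (or difference) of the individual terms, $\frac{f'(z)}{f(z)} = \frac{4}{z-i} + \frac{3}{z+2i} - \frac{2}{z} - \frac{1}{z-(4-3i)}$; the discretization below is my own sketch:

```python
import numpy as np

def circle_integral(h, radius, n=8192):
    """Integral of h(z) dz over |z| = radius, counterclockwise."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = radius * np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / n)
    return np.sum(h(z) * dz)

# f(z) = (z-i)^4 (z+2i)^3 / (z^2 (z - (4-3i)))
logderiv = lambda z: 4/(z - 1j) + 3/(z + 2j) - 2/z - 1/(z - (4 - 3j))

big   = circle_integral(logderiv, radius=10.0)  # encloses everything: 2*pi*i*(4+3-2-1)
small = circle_integral(logderiv, radius=3.0)   # misses the pole at 4-3i: 2*pi*i*(4+3-2)
```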

Now this is NOT the main use of this result; the main use is to describe the argument principle and to get to Rouché’s Theorem which, in turn, can be used to deduce facts about the zeros of an analytic function.

Argument principle: our discussion about integrating $\frac{f'(z)}{f(z)}$ around a closed curve (assuming that said curve runs through no zeros or poles of $f$ and encloses a finite number of them) shows that, as we traverse the curve, the argument of the function changes by $2 \pi (\text{ no. of zeros - no. of poles})$ where the zeros and poles are counted with multiplicities.

Example: consider the function $f(z) = z^8 + z^2 + 1$. Let’s find how many zeros it has in the first quadrant.

If we consider the quarter circle of very large radius $R$ (that stays in the first quadrant and is large enough to enclose all first quadrant zeros) and note $f(Re^{it}) = R^8e^{i8t}(1+ \frac{1}{R^6}e^{-i6t} + \frac{1}{R^8}e^{-i8t})$, we see that the argument changes by about $8(\frac{\pi}{2}) = 4 \pi$. The function has no roots along the positive real axis, since $f(x) = x^8 + x^2 + 1 \geq 1$ for real $x$. Now setting $z = iy$ to run along the positive imaginary axis we get $f(iy) = y^8 -y^2 + 1$, which is positive for large $y$, has one relative minimum at $y = 2^{\frac{-1}{3}}$ which yields a positive value, and equals $1$ at $y = 0$. So the argument picks up no net change along the two axes, and we get $4 \pi = 2 \pi (\text{no. of roots in the first quadrant})$, which means that we have 2 roots in the first quadrant.
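As a brute-force check (a check of the conclusion, not part of the argument itself), a numerical root finder can count the first-quadrant roots directly; here is a sketch using NumPy's `roots`:

```python
import numpy as np

# coefficients of z^8 + z^2 + 1, highest degree first
coeffs = [1, 0, 0, 0, 0, 0, 1, 0, 1]
roots = np.roots(coeffs)

# count the roots lying strictly inside the open first quadrant;
# the argument-principle count says there are 2
first_quadrant = [r for r in roots if r.real > 0 and r.imag > 0]
```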


Now for Rouché’s Theorem.
Here is Rouché’s Theorem: let $f, g$ be analytic on some piecewise smooth simple closed curve $C$ and on the region that $C$ encloses. Suppose that, on $C$, we have $|f(z) + g(z)| < |f(z)|$. Then $f, g$ have the same number of zeros (counted with multiplicity) inside $C$. Note: the inequality precludes $f$ (and $g$) from having a zero on $C$, and we can assume that $f, g$ have no common zeros inside $C$, for if they do, we can “cancel them out” by, say, writing $f(z) = (z-z_0)^m f_1(z), g(z) = (z-z_0)^mg_1(z)$ at the common zeros. So now, on $C$ we have $|1 + \frac{g(z)}{f(z)}| < 1$, which means that the values of the new function $\frac{g(z)}{f(z)}$ lie within the disk $|w+1| < 1$ in the range space. This means that the argument of $\frac{g(z)}{f(z)}$ has to always lie between $\frac{\pi}{2}$ and $\frac{3 \pi }{2}$. This means that the argument cannot change by $2 \pi$ as we traverse $C$, so, up to multiplicity, the number of zeros and poles of $\frac{g(z)}{f(z)}$ inside $C$ must be equal. But the poles come from the zeros of the denominator $f$ and the zeros come from the zeros of the numerator $g$.

And note: once again, what happens on the boundary of a region (the region bounded by the closed curve) determines what happens INSIDE the curve.

Now let’s see what we can do with this. Consider our $g(z) = z^8 + z^2 + 1$. Now $|z^8 -(z^8 + z^2 + 1)| =|z^2+1| < |z^8|$ on $|z| = \frac{3}{2}$ (and actually on somewhat smaller circles too). This means that $z^8$ and $z^8+z^2 + 1$ have the same number of roots inside the circle $|z| = \frac{3}{2}$: eight roots (counting multiplicity). Now note that $|z^8 +z^2 + 1 -1| = |z^8+z^2| < 1$ on $|z| = \frac{2}{3}$. So $z^8 +z^2 + 1$ and $1$ have the same number of zeros inside the circle $|z| = \frac{2}{3}$: none. This means that all of the roots of $z^8+z^2 + 1$ lie in the annulus $\frac{2}{3} < |z| < \frac{3}{2}$.
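Again, this can be spot-checked with a numerical root finder (a check of the conclusion, not of Rouché's theorem itself):

```python
import numpy as np

# roots of z^8 + z^2 + 1 (coefficients, highest degree first)
roots = np.roots([1, 0, 0, 0, 0, 0, 1, 0, 1])
moduli = np.abs(roots)

# Rouché's theorem says every root satisfies 2/3 < |z| < 3/2
in_annulus = np.all((moduli > 2 / 3) & (moduli < 3 / 2))
```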

# Some Big Theorems

Now we’ve established the following: if $f$ is analytic on an open connected set $A$ and $z_0 \in A$, then $f$ has derivatives of all orders and $f(z) = \sum^{\infty}_{k=0} a_k (z-z_0)^k$ where $a_k = \frac{f^{(k)}(z_0)}{k!}$ and where $f^{(k)}(z_0) =\frac{k!}{2\pi i} \int_{C_r} \frac{f(w)}{(w-z_0)^{k+1}} dw$, where $C_r$ is a circle $|w-z_0|=r$ whose closed disk is contained in $A$.
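The coefficient formula can be checked numerically on a concrete example (my own choice, not from the text): for $f(w) = e^w$ at $z_0 = 0$ we have $f^{(k)}(0) = 1$, so $a_k$ should come out to $\frac{1}{k!}$.

```python
import numpy as np
from math import factorial

def taylor_coeff(f, z0, k, r=1.0, n=4096):
    """a_k = (1/2*pi*i) * integral over C_r of f(w)/(w - z0)^(k+1) dw,
    where C_r is the circle |w - z0| = r."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = z0 + r * np.exp(1j * t)
    dw = 1j * (w - z0) * (2.0 * np.pi / n)
    return np.sum(f(w) / (w - z0) ** (k + 1) * dw) / (2j * np.pi)

a3 = taylor_coeff(np.exp, 0.0, 3)   # should be close to 1/3! = 1/6
a5 = taylor_coeff(np.exp, 0.0, 5)   # should be close to 1/5! = 1/120
```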

This puts us in a position to prove some interesting theorems, concepts and results that we will use.

We start with Morera’s Theorem. Basically, this theorem says that if $f$ is continuous on some open connected set $A$ AND if $\int_{\gamma} f(z) dz = 0$ for all closed curves $\gamma$ of a specific type (our text uses triangles, but I’ve seen some texts use rectangles, or even rectangles with sides parallel to the real and imaginary axes), then $f$ is analytic in $A$. Note: the converse, that an analytic function has zero integral over every closed curve in its domain, is false when the domain is not simply connected; example: $f(z) = \frac{1}{z}$ is analytic on $\mathbb{C}-\{0\}$ but if $\gamma$ is any piecewise smooth simple closed curve enclosing the origin (with standard orientation) then $\int_{\gamma} \frac{1}{z} dz = 2 \pi i$.

We will see that this is the prototypical thing that can go wrong.

Now, for a sketch of the proof: fix $z_0 \in A$, let $z \in A$, and define $F(z) = \int^{z}_{z_0} f(w) dw$ where the integration is along the straight line path from $z_0$ to $z$.

Let $h$ be a small complex number. Then $F(z+h) =\int^{z+h}_{z_0} f(w)dw = \int^{z}_{z_0}f(w)dw + \int^{z+h}_z f(w)dw$ because the integral around a triangle is assumed to be zero. Hence $F(z+h) - F(z) = \int^{z+h}_z f(w)dw$

Now $|\frac{F(z+h)-F(z)}{h} - f(z)| = |\int^{z+h}_z \frac{f(w)-f(z)}{h} dw |$ (note: $f(z) = \int^{z+h}_z \frac{f(z)}{h} dw$)

Choose $z, w$ close enough so that $|f(w) -f(z)| < \epsilon$ ($f$ is continuous).
Then $|\int^{z+h}_z \frac{f(w)-f(z)}{h} dw | < \frac{\epsilon}{|h|}|h| = \epsilon$

It now follows that $F'(z) = f(z)$ (that is, $f$ has a primitive on $A$), and because $F$ is analytic and derivatives of analytic functions are themselves analytic, so is $f = F'$.
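The construction of $F$ can be imitated numerically. With $f(w) = e^w$ and $z_0 = 0$ (my own test case), the straight-line integral gives $F(z) = e^z - 1$, and the difference quotient should return approximately $f(z)$:

```python
import numpy as np

def F(z, f, z0=0.0, n=20000):
    """Integral of f from z0 to z along the straight segment,
    parametrized as w(t) = z0 + t(z - z0), t in [0, 1] (midpoint rule)."""
    t = (np.arange(n) + 0.5) / n
    w = z0 + t * (z - z0)
    return np.sum(f(w) * (z - z0)) / n   # sum of f(w(t)) * w'(t) * dt

z, h = 0.4 + 0.3j, 1e-5
diff_quot = (F(z + h, np.exp) - F(z, np.exp)) / h   # should approach e^z
```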

Theorem: Let $f$ be analytic on an open connected set $A$ and suppose there is some $z_0 \in A$ where $f^{(k)}(z_0) = 0$ for all $k \in \{0, 1, 2,... \}$. Then $f$ is identically zero on $A$.

Note: this is false for real variable functions.

Proof: a careful proof depends on some topology and more advanced properties, so we’ll prove it for an open disk and note that any two points in $A$ can be connected by a finite chain of overlapping disks, and one can show that $f$ is identically zero on each of these disks in turn.

Now for the single disk: here every coefficient $a_k = \frac{f^{(k)}(z_0)}{k!} = 0$, hence $f(z) = \sum^{\infty}_{k=0} a_k (z-z_0)^k = 0$ on this whole disk.

Concept: a zero of an analytic function and its order.

If $f$ is analytic and $f(z_0) = 0$ we say that $f$ has a zero at $z_0$. Because $f$ is analytic we see that $f(z) = \sum^{\infty}_{k=0} a_k (z-z_0)^k$. If all $a_k = 0$ then $f$ is the zero function. Otherwise, if $a_0, a_1, ...a_{m-1}$ are all zero but $a_m \neq 0$, we say that $z_0$ is a zero of order $m$.

Example: $f(z) = z^2 + 1$ has zeros of order 1 at $z = \pm i$; $sin(z)$ has zeros of order 1 at $z = k \pi, k \in \{0, \pm 1, \pm 2,... \}$; $g(z) = (z+i)^3(z^2-9)$ has a zero of order 3 at $-i$ and zeros of order 1 at $\pm 3$.

Now let $f$ have a zero of order $m$ at $z_0$. Then
$f(z) = a_m (z-z_0)^m + a_{m+1} (z-z_0)^{m+1} + ... =(z-z_0)^m \sum^{\infty}_{k=0} a_{m+k}(z-z_0)^k$

Let $g(z) = \sum^{\infty}_{k=0} a_{m+k}(z-z_0)^k$; this is an analytic function that does NOT have a zero at $z_0$.

So $f(z) = (z-z_0)^m g(z)$ where $g(z_0) \neq 0$ and $g$ is analytic. This is an important concept.
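Tying this back to the first section: the order $m$ can be recovered numerically as $\frac{1}{2\pi i}\oint \frac{f'(z)}{f(z)} dz$ around a small circle about $z_0$. Here is a check (my own discretization) on $g(z) = (z+i)^3(z^2-9)$, which has a zero of order 3 at $z = -i$:

```python
import numpy as np

g  = lambda z: (z + 1j) ** 3 * (z ** 2 - 9)
gp = lambda z: 3 * (z + 1j) ** 2 * (z ** 2 - 9) + (z + 1j) ** 3 * 2 * z  # product rule

def zero_order(f, fprime, z0, r=0.25, n=4096):
    """Order of the zero of f at z0, computed as
    (1/2*pi*i) * integral of f'/f over the circle |z - z0| = r."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = z0 + r * np.exp(1j * t)
    dz = 1j * (z - z0) * (2.0 * np.pi / n)
    total = np.sum(fprime(z) / f(z) * dz) / (2j * np.pi)
    return int(round(total.real))

m = zero_order(g, gp, z0=-1j)   # the zeros at +/-3 are far outside the small circle
```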

Liouville’s Theorem: Let $f$ be entire (analytic on the whole complex plane). If $f$ is bounded, then $f$ is constant.

Note: this is completely different from real variable functions. For example: $sin(x)$ has a power series expansion at $x = 0$ that is valid on the whole real line, and $|sin(x)| \leq 1$, yet $sin$ is certainly not constant. (Of course, $sin(z)$ is unbounded off the real axis.)

Proof of Liouville’s Theorem: I’ll present an alternate proof that uses Cauchy’s integral formula for derivatives. Let $f$ be entire and bounded; say $|f(z)| \leq M$ for all $z$. Now for any $z$ we have $f'(z) = \frac{1}{2 \pi i} \int_{C_R} \frac{f(w)}{(w-z)^2} dw$ where $C_R$ is $|w| = R$, taken large enough to enclose $z$. Note: $R$ can be as large as desired.

Now $|f'(z)| = \frac{1}{2 \pi} |\int_{C_R} \frac{f(w)}{(w-z)^2} dw| \leq \frac{1}{2 \pi} \frac{M}{(R-|z|)^2} (2 \pi R)$ (the maximum modulus of the function being integrated times the arc length) so $|f'(z)| \leq \frac{MR}{(R-|z|)^2}$ for ALL $R > 0$. Let $R \rightarrow \infty$ and get that $|f'(z)| = 0$ for ANY $z$. Hence $f'(z) = 0$ for all $z$ and therefore $f$ is constant.
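The derivative formula used in this proof can be spot-checked numerically (my own test function; $e^w$ is of course not bounded, but the formula itself holds for any entire function):

```python
import numpy as np

def deriv_via_cauchy(f, z, R, n=8192):
    """f'(z) = (1/2*pi*i) * integral over |w| = R of f(w)/(w - z)^2 dw,
    valid for |z| < R."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = R * np.exp(1j * t)
    dw = 1j * w * (2.0 * np.pi / n)
    return np.sum(f(w) / (w - z) ** 2 * dw) / (2j * np.pi)

z = 0.3 - 0.2j
fp = deriv_via_cauchy(np.exp, z, R=2.0)   # (e^z)' = e^z
```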

Now we present the book’s proof, which does not use the Cauchy integral formula for derivatives.

Let $F$ be entire and bounded, say $|F(z)| \leq M$ for all $z$, and write $F(z) = a_0 + a_1 z + a_2 z^2 + a_3 z^3+...$. Let $g(z) = a_1 + a_2 z + a_3 z^2 +...$ (think: $g(z) = \frac{F(z) - F(0)}{z}$ for $z \neq 0$ and $g(0) = a_1$).

Note that for $z \neq 0$, $|g(z)| = \frac{|F(z)-F(0)|}{|z|}$, so on $|z| = R$ we get $|g(Re^{it})| \leq \frac{M+M}{R} = \frac{2M}{R}$.

So $g(z) = \frac{1}{2 \pi i} \int_{C_R} \frac{g(w)}{w-z} dw$ and so $|g(z)| =\frac{1}{2\pi}|\int_{C_R} \frac{g(w)}{w-z} dw| \leq \frac{1}{2 \pi} \frac{2M}{R} \frac{1}{R-|z|} 2 \pi R = \frac{2M}{R-|z|}$, which goes to zero as $R \rightarrow \infty$. Hence $g$ is identically zero; that is, $a_1 = a_2 = ... = 0$, so $F(z) = a_0$ is constant.

We now go to the Fundamental Theorem of Algebra.

Let $P(z)$ be any non-constant polynomial with complex coefficients. Then $P(z)$ has at least one root.

Proof: suppose $P$ has no roots; then $\frac{1}{P(z)}$ is an entire function. Note also: if $P(z) = a_0 + a_1z + a_2 z^2 + ...+a_n z^n$, then $|P(z)| \geq |a_n||z|^n - |a_{n-1}||z|^{n-1} - ...-|a_0|$. Now let $R$ be so large that $|a_n| R^n > |a_{n-1}|R^{n-1} + ...+|a_0|$. Now let $M = \min\{|P(z)| : |z| \leq R \}$ (which is positive since $P$ has no roots); then we have $\frac{1}{|P(z)|} \leq \max \{\frac{1}{M}, \frac{1}{|a_n|R^n - |a_{n-1}|R^{n-1} - ...-|a_0|} \}$. Note the second term in the “max” goes to 0 as $R$ goes to infinity. This means that $\frac{1}{P(z)}$ is a bounded, entire function and therefore constant, which is only possible if $P$ is a constant polynomial, a contradiction. So ALL non-constant polynomials with complex coefficients have complex roots.

We can say a bit more: let $z_1$ be a root of $P$, say of order $m_1$. Then $P(z) = (z-z_1)^{m_1} g_1(z)$ where $g_1(z)$ is analytic and does NOT have a zero at $z_1$. Note that $g_1(z)$ is a polynomial too. So if $g_1(z)$ is not constant, we can find a root $z_2$ of order $m_2$ and proceed:

$P(z) = (z-z_1)^{m_1}(z-z_2)^{m_2}....(z-z_k)^{m_k} g_{k+1}$ where $g_{k+1}$ is a constant. By comparing degrees, we note that $m_1 +m_2 +...+m_k = deg(P)$.
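Here is a small numerical illustration of the complete factorization (a sketch with a polynomial of my own choosing; `np.poly` builds the coefficients from a list of roots):

```python
import numpy as np

# P(z) = (z - 1)^2 (z + 2i)^3: root 1 with m = 2, root -2i with m = 3
coeffs = np.poly([1, 1, -2j, -2j, -2j])
roots = np.roots(coeffs)

# cluster the computed roots around the true ones and count multiplicities
near_1   = sum(1 for r in roots if abs(r - 1) < 1e-2)
near_m2i = sum(1 for r in roots if abs(r + 2j) < 1e-2)
# the multiplicities should add up to deg(P) = 5
```

(Repeated roots are recovered only approximately by a numerical solver, hence the tolerance.)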

Again, this is FALSE for polynomials with real coefficients; $x^2+ 1$ does not factor over the reals.