Having let the Shavgulidze-Thompson project slide out of the in-tray and into the mountain of Unfinished Loose Ends, I feel I should compensate with something vaguely mathematical. Hence this post, which is a follow-up to some comments I left on a post at the Secret Blogging Seminar.

More precisely, in response to Q2 on that post, I left some rather dim-witted and error-strewn comments, only to have light shed by this subsequent observation from Greg Kuperberg:

Proposition: Let $A$ and $B$ be two Hermitian matrices. Then the spectrum of $A+iB$ lies in the rectangle $[\lambda_{\min}(A),\lambda_{\max}(A)]+i[\lambda_{\min}(B),\lambda_{\max}(B)]$ determined by the smallest and largest eigenvalues of $A$ and of $B$.
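The proposition is easy to test numerically. Here is a quick sanity check (a sketch in Python/NumPy, not part of the original post): generate random Hermitian $A$ and $B$ and verify that every eigenvalue of $A+iB$ has real part between the extreme eigenvalues of $A$ and imaginary part between those of $B$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

def random_hermitian(n):
    """Return a random n x n Hermitian matrix."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A = random_hermitian(n)
B = random_hermitian(n)

# Eigenvalues of a Hermitian matrix are real; eigvalsh returns them sorted.
a = np.linalg.eigvalsh(A)   # a[0] = smallest, a[-1] = largest
b = np.linalg.eigvalsh(B)

spec = np.linalg.eigvals(A + 1j * B)

eps = 1e-9  # slack for floating-point error
assert np.all(spec.real >= a[0] - eps) and np.all(spec.real <= a[-1] + eps)
assert np.all(spec.imag >= b[0] - eps) and np.all(spec.imag <= b[-1] + eps)
print("all eigenvalues of A+iB lie in the rectangle")
```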

Once GK stated the correct result, I realised that it followed from some facts that I really should have known – or knew, but had momentarily forgotten. It seems that the argument I had in mind is slightly different, at least in presentation, from the proof GK had in mind, and so I thought I’d give it here. (His reasoning seems like it should be more robust, and extend more easily to the case of bounded operators on infinite-dimensional Hilbert space.)

Claim. Let $A$ and $B$ be normal matrices, with spectra $\sigma(A)$ and $\sigma(B)$ respectively. Then the spectrum of $A+B$ is contained in ${\rm co}\, \sigma(A)+{\rm co}\,\sigma(B)$.

Proof. Since $A$ is normal, there exists an orthonormal basis of ${\bf C}^n$ consisting of eigenvectors of $A$. Let’s denote this basis by $v_1,\dots,v_n$, and let the corresponding eigenvalues be $\lambda_1,\dots,\lambda_n$.

Similarly, there is an orthonormal basis $w_1,\dots, w_n$ and scalars $\mu_1,\dots,\mu_n$ such that $Bw_k=\mu_kw_k$ for all $k$.

Now let $\alpha$ be an eigenvalue of $A+B$, and let $x$ be a corresponding eigenvector of unit length. We have $\alpha = \langle \alpha x , x\rangle = \langle (A+B)x,x\rangle = \langle Ax,x\rangle + \langle Bx, x\rangle.$

But now we can exploit the fact that $A$ and $B$ each have a complete set of orthonormal eigenvectors. In particular, writing $x = \sum_j \langle x, v_j \rangle v_j$ we have $\langle Ax , x \rangle = \sum_j \lambda_j \vert \langle x, v_j \rangle \vert^2.$

We have $\sum_j \vert\langle x, v_j\rangle\vert^2=1$ (again, using the orthonormality of the $v_j$), so $\langle Ax,x\rangle$ is a convex combination of the $\lambda_j$ and hence lies in ${\rm co}\, \{ \lambda_1,\dots,\lambda_n\} = {\rm co}\,\sigma(A)$. An entirely analogous argument, this time using the $w_k$, tells us that $\langle Bx, x\rangle \in {\rm co}\,\sigma(B)$. Hence $\alpha=\langle Ax,x\rangle+ \langle Bx,x\rangle$ lies in the sum of these two convex hulls, as claimed.
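The key step of the proof can be seen numerically as well (a sketch, not part of the original argument): building a normal matrix as $A=UDU^*$ with known orthonormal eigenvectors, the number $\langle Ax,x\rangle$ is exactly the convex combination of the eigenvalues with weights $\vert\langle x,v_j\rangle\vert^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Build a normal matrix A = U D U* with known orthonormal eigenvectors
# (the columns of U) and known complex eigenvalues (the entries of d).
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)                      # unitary
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = U @ np.diag(d) @ U.conj().T

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                      # unit vector

# The weights |<x, v_j>|^2 form a probability vector ...
w = np.abs(U.conj().T @ x) ** 2
assert np.isclose(w.sum(), 1.0)

# ... and <Ax, x> = x* A x is the corresponding convex combination
# of the eigenvalues, so it lies in co(sigma(A)).
lhs = np.vdot(x, A @ x)                     # x* (A x)
rhs = np.sum(w * d)
assert np.isclose(lhs, rhs)
```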

Cards on the table, or the man behind the curtain

I have to confess that the phrasing of the argument above wasn’t the first that came to mind when I read GK’s comment. Lurking in the background — above and, I suspect, in his approach also — is the concept of numerical range. The numerical range of an $n\times n$ complex matrix $M$ is the set $W(M)=\{ \langle Mx ,x \rangle | x \in {\bf C}^n, \|x\|_2 = 1 \}$

and it is clear that $W(A+B)$ is contained in $W(A)+W(B)$ for every pair $A, B$ of $n\times n$ matrices, since $\langle (A+B)x,x\rangle=\langle Ax,x\rangle+\langle Bx,x\rangle$ for each unit vector $x$. Now, by considering appropriate eigenvectors, one sees that every eigenvalue of $M$ is contained in $W(M)$. Also, if $D$ is a diagonal matrix, then the same calculation that was made above shows that $W(D)$ is contained in the convex hull of $\sigma(D)$, and since the numerical range is unchanged if we conjugate by a unitary matrix, it follows that $W(M)\subseteq {\rm co}\, \sigma(M)$ for every normal matrix $M$. In particular, if $A$ and $B$ are normal $n\times n$ matrices then $\sigma(A+B) \subseteq W(A+B) \subseteq W(A)+W(B) \subseteq {\rm co}\,\sigma(A)+{\rm co}\,\sigma(B)$

which is in effect what we proved above.
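This containment, too, can be checked numerically (a sketch, not from the original post). Since ${\rm co}\,\sigma(A)+{\rm co}\,\sigma(B)={\rm co}\,\{\lambda_j+\mu_k\}$, each eigenvalue $\alpha$ of $A+B$ should satisfy the support-function inequality ${\rm Re}(e^{-i\theta}\alpha)\le\max_{j,k}{\rm Re}\big(e^{-i\theta}(\lambda_j+\mu_k)\big)$ in every direction $\theta$; the check below tests a fine grid of directions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_normal_matrix(n):
    """A normal matrix U diag(d) U* with random complex eigenvalues d."""
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    U, _ = np.linalg.qr(Z)
    d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return U @ np.diag(d) @ U.conj().T, d

A, lam = random_normal_matrix(n)
B, mu = random_normal_matrix(n)

# co(sigma(A)) + co(sigma(B)) = co{lambda_j + mu_k}: the pairwise sums.
sums = (lam[:, None] + mu[None, :]).ravel()

# A point p lies in the convex hull of `sums` iff, for every direction theta,
# Re(e^{-i theta} p) <= max_k Re(e^{-i theta} sums_k).  Test many directions.
thetas = np.linspace(0, 2 * np.pi, 720, endpoint=False)
dirs = np.exp(-1j * thetas)
hull_support = (dirs[:, None] * sums).real.max(axis=1)

eps = 1e-9
for alpha in np.linalg.eigvals(A + B):
    assert np.all((dirs * alpha).real <= hull_support + eps)
print("spectrum of A+B lies in co(sigma(A)) + co(sigma(B))")
```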

The reason I should have remembered this is that a short while back I was reading up on some aspects of the numerical range for operators on infinite-dimensional spaces. The definition is the obvious one, and what is interesting is that we still have

1. $\sigma(T)\subseteq \overline{W(T)}$ for every bounded operator $T$;
2. $\overline{W(M)} = {\rm co}\,\sigma(M)$ for every normal operator $M$.

(In infinite dimensions $W(T)$ need not be closed, which is why the closures appear.)

Note that in infinite dimensions the spectrum of $T$ might contain points which are not eigenvalues (for instance, multiplication by $t$ on $L^2[0,1]$ is a normal operator with spectrum $[0,1]$ but no eigenvalues), and so the argument above with eigenvectors doesn’t work anymore.
