# A simple remark on the spectrum of certain matrix sums

Having let the Shavgulidze-Thompson project slide out of the in-tray and into the mountain of Unfinished Loose Ends, I feel I should compensate with something vaguely mathematical. Hence this post, which is a follow-up to some comments I left on a post at the Secret Blogging Seminar.

More precisely, in response to Q2 on that post, I left some rather dim-witted and error-strewn comments, only to have light shed by this subsequent observation from Greg Kuperberg:

**Proposition:** *Let $A$ and $B$ be two Hermitian matrices. Then the spectrum of $A+iB$ lies in the rectangle formed by the first and last eigenvalues of $A$ and $B$; that is, in $\{ s+it : \lambda_{\min}(A) \le s \le \lambda_{\max}(A),\ \lambda_{\min}(B) \le t \le \lambda_{\max}(B)\}$.*
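As a quick numerical sanity check of the proposition (an illustrative sketch only — the matrix size, random seed, and tolerance are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_hermitian(n):
    """A random Hermitian matrix: (M + M*)/2 for complex Gaussian M."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(n), random_hermitian(n)

eigs = np.linalg.eigvals(A + 1j * B)   # spectrum of A + iB (complex in general)
a = np.linalg.eigvalsh(A)              # real eigenvalues of A, ascending
b = np.linalg.eigvalsh(B)              # real eigenvalues of B, ascending

tol = 1e-10
# real parts are pinned between the extreme eigenvalues of A,
# imaginary parts between the extreme eigenvalues of B
assert np.all(eigs.real >= a[0] - tol) and np.all(eigs.real <= a[-1] + tol)
assert np.all(eigs.imag >= b[0] - tol) and np.all(eigs.imag <= b[-1] + tol)
```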

Once GK stated the correct result, I realised that it followed from some facts that I really should have known – or knew, but had momentarily forgotten. It seems that the argument I had in mind is slightly different, at least in presentation, from the proof GK had in mind, and so I thought I’d give it here. (His reasoning seems like it should be more robust, and extend more easily to the case of bounded operators on infinite-dimensional Hilbert space.)

**Claim.** *Let $A$ and $B$ be normal matrices, with spectra $\sigma(A)$ and $\sigma(B)$ respectively. Then the spectrum of $A+B$ is contained in $\operatorname{conv}\sigma(A) + \operatorname{conv}\sigma(B)$.*

**Proof.** Since $A$ is normal, there exists an orthonormal basis of $\mathbb{C}^n$ which consists of eigenvectors for $A$. Let’s denote this basis by $v_1, \dots, v_n$ and let the corresponding eigenvalues be $\alpha_1, \dots, \alpha_n$.

Similarly, there is an orthonormal basis $w_1, \dots, w_n$ and scalars $\beta_1, \dots, \beta_n$ such that $Bw_j = \beta_j w_j$ for all $j$.

Now let $\lambda$ be an eigenvalue of $A+B$, and let $x$ be a corresponding eigenvector of unit length. We have

$$\lambda = \langle (A+B)x, x \rangle = \langle Ax, x \rangle + \langle Bx, x \rangle.$$

But now we can exploit the fact that $A$ and $B$ each have a complete set of orthonormal eigenvectors. In particular, writing $x = \sum_{j=1}^n c_j v_j$ we have

$$\langle Ax, x \rangle = \sum_{j=1}^n \alpha_j |c_j|^2.$$

We have $\sum_{j=1}^n |c_j|^2 = \|x\|^2 = 1$ (again, using the orthonormality of the $v_j$) and so $\langle Ax, x \rangle \in \operatorname{conv}\sigma(A)$. An exactly similar argument, this time using the $w_j$, tells us that $\langle Bx, x \rangle \in \operatorname{conv}\sigma(B)$. Hence $\lambda$ lies in the sum of these two convex hulls, as claimed.
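The key identity in the proof — that $\langle Ax, x\rangle$ is a convex combination $\sum_j \alpha_j |c_j|^2$ of the eigenvalues of $A$ — can be checked numerically. Here is a sketch in numpy; building the normal matrices from an explicit unitary and diagonal is my own convenience, so that the eigenbasis is known exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def random_normal(n):
    """Random normal matrix Q diag(d) Q*, returned with its eigendata."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(M)                      # Q is unitary
    d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return Q @ np.diag(d) @ Q.conj().T, Q, d

A, QA, alpha = random_normal(n)
B, QB, beta = random_normal(n)

lam, X = np.linalg.eig(A + B)
x = X[:, 0] / np.linalg.norm(X[:, 0])           # unit eigenvector for lam[0]

# lambda = <(A+B)x, x> = <Ax, x> + <Bx, x>
assert np.isclose(lam[0], x.conj() @ (A + B) @ x)

# expand x in the eigenbasis of A: coefficients c_j, with sum |c_j|^2 = 1
c = QA.conj().T @ x
assert np.isclose(np.sum(np.abs(c) ** 2), 1.0)

# <Ax, x> = sum_j alpha_j |c_j|^2: a convex combination of the alpha_j
assert np.isclose(x.conj() @ A @ x, np.sum(alpha * np.abs(c) ** 2))
```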

## Cards on the table, or the man behind the curtain

I have to confess that the phrasing of the argument above wasn’t the first that came to mind when I read GK’s comment. Lurking in the background — above and, I suspect, in his approach also — is the concept of *numerical range.* The numerical range of an $n\times n$ complex matrix $T$ is the set

$$W(T) = \{ \langle Tx, x \rangle : x \in \mathbb{C}^n,\ \|x\| = 1 \}$$

and it is clear that $W(S+T)$ is contained in $W(S) + W(T)$ for every pair of matrices $S$ and $T$. Now, by considering appropriate eigenvectors, one sees that every eigenvalue of $T$ is contained in $W(T)$. Also, if $D$ is a diagonal matrix, then the same calculation that was made above shows that $W(D)$ is contained in the convex hull of $\sigma(D)$, and since the numerical range is unchanged if we conjugate by a unitary matrix, it follows that $W(T) \subseteq \operatorname{conv}\sigma(T)$ for every *normal* matrix $T$. In particular, if $A$ and $B$ are normal matrices then

$$\sigma(A+B) \subseteq W(A+B) \subseteq W(A) + W(B) \subseteq \operatorname{conv}\sigma(A) + \operatorname{conv}\sigma(B),$$

which is in effect what we proved above.
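The diagonal-matrix observation is easy to probe by sampling. In the sketch below (sizes, seed, and sample count are my own choices) I take a Hermitian matrix, whose numerical range is simply the real interval between its smallest and largest eigenvalues, and check that sampled values $\langle Tx, x\rangle$ stay inside it:

```python
import numpy as np

rng = np.random.default_rng(2)
n, samples = 5, 2000

d = rng.standard_normal(n)                      # real spectrum
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(M)                          # Q unitary
T = Q @ np.diag(d) @ Q.conj().T                 # Hermitian, spectrum d

tol = 1e-10
for _ in range(samples):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)
    w = x.conj() @ T @ x                        # a point of W(T)
    # for Hermitian T, W(T) is the real interval [min(d), max(d)]
    assert abs(w.imag) < tol
    assert d.min() - tol <= w.real <= d.max() + tol
```

For a normal matrix with genuinely complex spectrum the same sampling stays inside the convex polygon spanned by the eigenvalues; the Hermitian case is just the easiest one to test against a closed interval.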

The reason I should have remembered this is that a short while back I was reading up on some aspects of the numerical range for operators on infinite-dimensional spaces. The definition is the obvious one, and what is interesting is that we still have

- $\sigma(T) \subseteq \overline{W(T)}$ for every bounded operator $T$;
- $\overline{W(T)} = \operatorname{conv}\sigma(T)$ for every normal operator $T$.

Note that in infinite dimensions the spectrum of $T$ might contain points which are not eigenvalues, and so the argument above with eigenvectors doesn’t work anymore.

What about the case of the spectrum of the parallel sum of two matrices?

I’m not sure what you mean by “parallel sum”. Could you please give an example?