As in the previous post, we define a function $s\colon\mathbb{C}\to\mathbb{C}$ by $s(0)=0$ and $s(z)=|z|^{-1/2}z$ for $z\neq 0$. Alternatively: $s(re^{i\theta})=r^{1/2}e^{i\theta}$ for all $r\ge 0$ and all $\theta\in\mathbb{R}$. Our goal is now to prove the following inequality.

**Proposition 1.** $|s(z)-s(w)|^2 \le 2\,|z-w|$ for all $z,w\in\mathbb{C}$.

We can simplify/reduce the problem slightly using symmetry arguments. To maintain a more formal style than the previous post, let’s spell this out more explicitly.

**Lemma 2.** Let $0\le\theta\le\pi$. Then $|\sqrt{r}\,e^{i\theta}-\sqrt{t}\,|^2 \le 2\,|re^{i\theta}-t|$ for all $0\le r\le t$.

*Proof of Proposition 1, using Lemma 2.* The desired inequality is trivial if either $z$ or $w$ is zero.

Let $z,w\in\mathbb{C}\setminus\{0\}$, and assume without loss of generality that $|z|\le|w|$.

Since $s$ is angle-preserving (and commutes with rotations and reflections of the complex plane) we may apply a rotation or reflection of the complex plane which maps $w$ to the positive real axis and $z$ to a point in the closed upper-half plane. We then let

$$r=|z| \quad\text{and}\quad t=|w|,$$

and choose $\theta\in[0,\pi]$ such that the image of $z$ under this rotation or reflection is $re^{i\theta}$.

Then

$$|s(z)-s(w)| = \big|\sqrt{r}\,e^{i\theta}-\sqrt{t}\,\big|,$$

which, by Lemma 2, is bounded above by

$$\sqrt{2\,|re^{i\theta}-t|} = \sqrt{2\,|z-w|},$$

as required. **Q.E.D.**

The proof of Lemma 2 will be divided into two cases.

**Case 1: $0\le\theta\le\pi/2$.** In this case $|\sqrt{r}\,e^{i\theta}-\sqrt{t}\,| \le |\sqrt{r}\,e^{i\theta}+\sqrt{t}\,|$; this is evident from the geometry of this configuration, but also follows from squaring both sides and using $\cos\theta\ge 0$. Therefore

$$|\sqrt{r}\,e^{i\theta}-\sqrt{t}\,|^2 \;\le\; |\sqrt{r}\,e^{i\theta}-\sqrt{t}\,|\,|\sqrt{r}\,e^{i\theta}+\sqrt{t}\,| \;=\; |re^{2i\theta}-t|. \qquad (3)$$

Also, observe that

$$|re^{2i\theta}-t| \;\le\; |re^{2i\theta}-re^{i\theta}| + |re^{i\theta}-t| \;\le\; 2\,|re^{i\theta}-t|. \qquad (4)$$

The last inequality is once again evident from a geometrical argument. To give a more algebraic/Cartesian argument: the two complex numbers $e^{-i\theta}(re^{2i\theta}-re^{i\theta}) = re^{i\theta}-r$ and $re^{i\theta}-t$ have the same imaginary part; their real parts are $r\cos\theta - r$ and $r\cos\theta - t$ respectively; and $|r\cos\theta - r| \le |r\cos\theta - t|$ since $r\cos\theta \le r \le t$.

Combining (3) and (4) we obtain

$$|\sqrt{r}\,e^{i\theta}-\sqrt{t}\,|^2 \le 2\,|re^{i\theta}-t|,$$

as required. **Q.E.D.**

**Case 2: $\pi/2\le\theta\le\pi$.** Let $c=-\cos\theta$, so that $0\le c\le 1$ and

$$|\sqrt{r}\,e^{i\theta}-\sqrt{t}\,|^2 = r+t+2c\sqrt{rt}. \qquad (6)$$

Note that as a special case of the Cauchy–Schwarz inequality,

$$x_1y_1+x_2y_2+x_3y_3 \;\le\; \big(x_1^2+x_2^2+x_3^2\big)^{1/2}\big(y_1^2+y_2^2+y_3^2\big)^{1/2}. \qquad (7)$$

Applying (7) with $(x_1,x_2,x_3)=(1,1,\sqrt{2c}\,)$ and $(y_1,y_2,y_3)=(r,t,\sqrt{2crt}\,)$, we obtain

$$r+t+2c\sqrt{rt} \;\le\; (2+2c)^{1/2}\,\big(r^2+t^2+2crt\big)^{1/2}. \qquad (8)$$

Since $0\le c\le 1$ and $r^2+t^2+2crt = |re^{i\theta}-t|^2$, we have $(2+2c)^{1/2}\big(r^2+t^2+2crt\big)^{1/2} \le 2\,|re^{i\theta}-t|$. Combining this with (6) and (8), we conclude that

$$|\sqrt{r}\,e^{i\theta}-\sqrt{t}\,|^2 \le 2\,|re^{i\theta}-t|,$$

as required. **Q.E.D.**
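As a quick numerical sanity check on the Cauchy–Schwarz step, the following sketch (in the substitution $c=-\cos\theta\in[0,1]$ assumed in this write-up; the variable names are mine) samples random values and verifies inequality (8) together with the bound $(2+2c)^{1/2}\le 2$:

```python
import math
import random

random.seed(0)
ok = True
for _ in range(10_000):
    r, t = random.uniform(0, 5), random.uniform(0, 5)
    c = random.uniform(0, 1)  # c = -cos(theta) when pi/2 <= theta <= pi
    # inequality (8): r + t + 2c*sqrt(rt) <= sqrt(2+2c) * sqrt(r^2 + t^2 + 2crt)
    lhs = r + t + 2 * c * math.sqrt(r * t)
    rhs = math.sqrt(2 + 2 * c) * math.sqrt(r * r + t * t + 2 * c * r * t)
    ok = ok and lhs <= rhs + 1e-9 and math.sqrt(2 + 2 * c) <= 2.0
print(ok)
```

Since $r^2+t^2+2crt=|re^{i\theta}-t|^2$, these two observed facts combine to give exactly the Case 2 estimate.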

- In comparison with the approaches taken in the previous blog post, note that in Case 2 we have improved the constant from $2\sqrt{2}$ to $2$, but with a slightly more ad hoc argument that is less geometric (although we are able to sidestep any messy calculus with Cauchy–Schwarz).
- It is also worth noting that for much of the proof, the formulation in terms of complex numbers is merely for convenience, and that most of the arguments merely take place in $\mathbb{R}^2$ with its usual norm — or, to be more abstract and less co-ordinate dependent, in 2-dimensional Euclidean space. The exception is one part of Case 1, namely the inequality (3), where we made use of the algebraic structure in $\mathbb{C}$ and the multiplicative property of $|\cdot|$. Can we find an alternative approach, which only uses the Euclidean geometry of $\mathbb{R}^2$?
- On a related note: the definition of the map $s$ can be extended to any normed space $(E,\|\cdot\|)$, as a radial square-root map: we define $s(0)=0$ and $s(v)=\|v\|^{-1/2}v$ for any non-zero vector $v$. If the norm of $E$ is Euclidean (i.e. it satisfies the parallelogram law) then, since 2-dimensional subspaces of any Euclidean real vector space are Euclidean, the previous remarks and Proposition 1 show that $\|s(v)-s(w)\|^2\le 2\,\|v-w\|$ for all $v,w\in E$. What happens for more general norms?
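As a sanity check, the radial square-root map and the inequality of Proposition 1 (in the constant-2 form assumed throughout this sketch) can be tested numerically; the helper names below are mine:

```python
import math
import random

def s(z: complex) -> complex:
    # radial square-root map: s(0) = 0, s(z) = z / |z|**0.5 otherwise
    return 0j if z == 0 else z / math.sqrt(abs(z))

def holder_ratio(z: complex, w: complex) -> float:
    # the quantity |s(z) - s(w)|^2 / |z - w|, conjecturally bounded by 2
    return abs(s(z) - s(w)) ** 2 / abs(z - w)

random.seed(0)
worst = 0.0
for _ in range(50_000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    if z != w:
        worst = max(worst, holder_ratio(z, w))

print(worst <= 2 + 1e-9)              # no sampled pair violates the bound
print(holder_ratio(1 + 0j, -1 + 0j))  # an antipodal pair attains the constant
```

The antipodal pair $z=1$, $w=-1$ gives ratio exactly $2$, which is why the constant in Proposition 1 cannot be improved.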

**Lemma 1.** If $a$ and $b$ are non-negative real numbers, then $|\sqrt{a}-\sqrt{b}\,|^2 \le |a-b|$.

The lemma is most easily/intuitively proved by noticing that $|\sqrt{a}-\sqrt{b}\,| \le \sqrt{a}+\sqrt{b}$ (draw a picture!) and then multiplying both sides of this inequality by $|\sqrt{a}-\sqrt{b}\,|$. The lemma also shows, without need for any calculus, that the square-root function on $[0,\infty)$ is Hölder continuous with exponent 1/2.
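Lemma 1 is easy to stress-test numerically; the following sketch (assuming the reading $|\sqrt{a}-\sqrt{b}\,|^2\le|a-b|$, with a helper name of my own) samples random pairs:

```python
import math
import random

def lemma1_gap(a: float, b: float) -> float:
    # |a - b| - |sqrt(a) - sqrt(b)|^2; Lemma 1 (as read here) says this is >= 0
    return abs(a - b) - (math.sqrt(a) - math.sqrt(b)) ** 2

random.seed(0)
worst = min(lemma1_gap(random.uniform(0, 100), random.uniform(0, 100))
            for _ in range(10_000))
print(worst >= 0)  # no sampled pair violates the lemma
```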

I was recently reminded of the inequality in Lemma 1 by some discussions with Matt Daws, who had pointed out to me that it gives an easy proof that the square-root function is continuous. (See also comments by Matt and others on this old MO question.) In some ongoing discussions with Matt and Jon Bannon, I found myself wanting a version of this for not-necessarily real-valued functions in $L_1$.

Specifically, consider the function $s\colon\mathbb{C}\to\mathbb{C}$ defined by

$$s(0)=0, \qquad s(z) = |z|^{-1/2}\,z \quad (z\neq 0),$$

and then define $\sigma\colon L_1(X,\mu)\to L_2(X,\mu)$ by

$$\sigma(f) = s\circ f.$$

Note that the restriction of $\sigma$ to the positive cone of $L_1(X,\mu)$ agrees with the square-root map already mentioned above.

A direct calculation shows that $\sigma$ is norm-preserving in the sense that $\|\sigma(f)\|_2^2=\|f\|_1$, but since it is nonlinear this does not automatically ensure continuity; and it is continuity of $\sigma$ which we wanted to know/check. Continuity of $\sigma$, or very similar maps, is presumably folklore, but having failed in some half-hearted attempts to find a reference for this result, I decided it would be easiest to try and come up with a proof with bare hands. Guided by the easy proof that $f\mapsto\sqrt{f}$ is continuous, it is natural to try and prove some 2-dimensional version of the inequality at the start of this post, perhaps at the expense of worse constants. Specifically, we would like to know that the following result holds.
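For concreteness, here is a minimal sketch of the map $s$ together with the norm identity, tested against a discrete analogue of $L_1$ (a finite sequence with counting measure); the definitions follow my reading of the formulas above:

```python
import cmath
import math
import random

def s(z: complex) -> complex:
    # radial square root: s(0) = 0, s(z) = z / |z|**0.5 for z != 0
    return 0j if z == 0 else z / math.sqrt(abs(z))

# s preserves arguments and takes the square root of the modulus
z = cmath.rect(4.0, 0.7)  # modulus 4, argument 0.7
assert abs(abs(s(z)) - 2.0) < 1e-12
assert abs(cmath.phase(s(z)) - 0.7) < 1e-12

# discrete analogue of "norm-preserving": ||s o f||_2^2 equals ||f||_1
random.seed(0)
f = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
lhs = sum(abs(s(v)) ** 2 for v in f)
rhs = sum(abs(v) for v in f)
print(abs(lhs - rhs) < 1e-8)
```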

**Claim 2.** There exists a constant $K$ such that $|s(z)-s(w)|^2 \le K\,|z-w|$ for all $z,w\in\mathbb{C}$.

By the same argument used to deduce continuity of $f\mapsto\sqrt{f}$ from Lemma 1, one sees that Claim 2 implies continuity of $\sigma$; in fact, we get Hölder continuity with exponent $1/2$.
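Spelling out that deduction in the notation used here (with $\sigma(f)=s\circ f$ viewed as a map from $L_1(X,\mu)$ to $L_2(X,\mu)$, which is my reading of the definition above):

```latex
\[
\|\sigma(f)-\sigma(g)\|_2^2
  \;=\; \int_X |s(f(x))-s(g(x))|^2 \,d\mu(x)
  \;\le\; K \int_X |f(x)-g(x)| \,d\mu(x)
  \;=\; K\,\|f-g\|_1 ,
\]
\[
\text{so that}\qquad
\|\sigma(f)-\sigma(g)\|_2 \;\le\; K^{1/2}\,\|f-g\|_1^{1/2}.
\]
```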

In what follows, I will sketch one possible way to prove this claim, not necessarily with the best constant . It is meant as a tidied version of a train of thought, rather than an attempt to present the cleanest and most polished approach. (I should acknowledge the influence, in spirit if not the fine detail, of these old blogposts-before-weblogs-existed writings by Timothy Gowers, which were quite influential on me during my years as a PhD student and postdoctoral researcher.)

The first thing to notice is that $s$ preserves arguments of complex numbers; its effect is merely radial (hence the title of this blog post). This means that in trying to prove Claim 2, we are always free to rotate $z$ and $w$ by a fixed angle, so that either can be assumed real if this is convenient. Introducing the change of variables $z=r^2e^{i\theta}$, $w=t^2$ (with $r,t\ge 0$), and rotating to make $w$ real and non-negative, we see that the claim is equivalent to

$$|re^{i\theta}-t|^2 \;\le\; K\,|r^2e^{i\theta}-t^2| \quad\text{for all } r,t\ge 0 \text{ and all } \theta. \qquad(*)$$

Next: what happens if we expand out both sides of the desired inequality (*)? The left hand side is $r^2+t^2-2rt\cos\theta$ — which you can also see by drawing a picture and using the cosine rule from school trigonometry — and the right hand side is then $K\,(r^4+t^4-2r^2t^2\cos\theta)^{1/2}$. At this point it looks unappealing to square both sides again and attempt to compare terms; this might work, but it looked messy when I tried it, so let’s take a step back and think again.

Recall that the case θ=0 is covered by Lemma 1. How or why does the proof break down for other values of θ? Well, for θ=0 the left hand side of (*) is $(r-t)^2$ and the right hand side is $K\,|r^2-t^2|$, and we won in this case because $(r-t)^2 \le |r-t|\,(r+t) = |r^2-t^2|$, so we can take $K=1$. But for general θ we don’t have the same convenient factorization of the right-hand side.

What would be nice is if we had $|r^2e^{2i\theta}-t^2|$ on the right hand side of (*). For this **does** factor as $|re^{i\theta}-t|\,|re^{i\theta}+t|$, and then we would be hoping to dominate $|re^{i\theta}-t|$ by a multiple of $|re^{i\theta}+t|$. Of course $|r^2e^{i\theta}-t^2|$ is not the same as $|r^2e^{2i\theta}-t^2|$ for general θ, but perhaps for small values of θ we can control the discrepancy with some crude bound, using calculus and the Mean-Value theorem if necessary?

Putting this thought on one side for the moment, let us think what to do when $e^{i\theta}$ is far from 1. In fact, as a “stress-test”, what happens when $\theta=\pi$? (With hindsight we should have thought of this case sooner, since it corresponds to looking at Lemma 1 and wondering whether it applies to all real numbers, not just non-negative values.) In this case the left hand side of (*) is $(r+t)^2$ and the right hand side is $K\,(r^2+t^2)$, so clearly we cannot take $K=1$ any more; nevertheless, the AM-GM inequality shows that we could take $K=2$ in this case.

At this point we have two separate working arguments for θ=0 and θ=π, so how can we handle intermediate cases? We already had a brief look at what happens for small values of θ, so let’s look at what happens when θ is close to π. Drawing a picture of the appropriate obtuse-angled triangle, we realise that for the right-hand side of (*) to be small, both $r$ and $t$ must be small. In fact, as we let θ vary between π/2 and 3π/2, $|r^2e^{i\theta}-t^2|$ is minimized at the endpoints (i.e. when we have a right-angled triangle), and so

$$|r^2e^{i\theta}-t^2| \;\ge\; (r^4+t^4)^{1/2}.$$

(The geometric intuition can be backed up by an appeal to the cosine rule; recall that on this interval, $\cos\theta\le 0$.)

What about the left hand side of (*)? Well, we may as well replace $\cos\theta$ with its “worst-case scenario”, namely $-1$, and we already saw that the resulting quantity $(r+t)^2$ is bounded above by $2(r^2+t^2)$. Applying Cauchy-Schwarz, we see that this in turn is bounded above by $2\sqrt{2}\,(r^4+t^4)^{1/2}$. So putting things together, we have established

**Lemma 3.** For $\pi/2\le\theta\le 3\pi/2$, and any $r,t\ge 0$, we have $|re^{i\theta}-t|^2 \le 2\sqrt{2}\,|r^2e^{i\theta}-t^2|$.
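Lemma 3, in the form reconstructed here, is easy to stress-test numerically over the obtuse range of θ (the helper name below is hypothetical):

```python
import cmath
import math
import random

def lemma3_holds(r: float, t: float, theta: float) -> bool:
    # |r e^{i theta} - t|^2 <= 2*sqrt(2) * |r^2 e^{i theta} - t^2| ?
    lhs = abs(r * cmath.exp(1j * theta) - t) ** 2
    rhs = 2 * math.sqrt(2) * abs(r * r * cmath.exp(1j * theta) - t * t)
    return lhs <= rhs + 1e-9

random.seed(0)
all_ok = all(
    lemma3_holds(random.uniform(0, 5), random.uniform(0, 5),
                 random.uniform(math.pi / 2, 3 * math.pi / 2))
    for _ in range(20_000)
)
print(all_ok)
```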

Let’s turn back to the case of acute-angled triangles, i.e. $-\pi/2\le\theta\le\pi/2$, or equivalently the region where $\cos\theta\ge 0$. Can we make good on the earlier hopes that

- $|re^{i\theta}-t|$ is dominated by (a multiple of) $|re^{i\theta}+t|$?
- $|r^2e^{2i\theta}-t^2|$ is dominated by (a multiple of) $|r^2e^{i\theta}-t^2|$?

Recall that both of these statements do hold when θ=0. In fact, drawing a picture, we see that $|re^{i\theta}-t| \le |re^{i\theta}+t|$ for any θ in this range; once again, the geometric intuition from drawing triangles can be backed up with explicit expansion of both sides using the cosine formula and the fact that $\cos\theta\ge 0$. So we do indeed have $|re^{i\theta}-t|^2 \le |re^{i\theta}-t|\,|re^{i\theta}+t| = |r^2e^{2i\theta}-t^2|$, and it only remains to prove that the second of our two statements holds.

We seem to be doing well drawing pictures, so let’s do this. While we’re at it, let us rescale the second statement to reduce notational clutter (dividing through by $t^2$ and writing $\rho=r^2/t^2$), so that we are aiming to prove

$$|\rho e^{2i\theta}-1| \;\le\; 2\,|\rho e^{i\theta}-1| \quad\text{for all }\rho\ge 0 \text{ and } -\pi/2\le\theta\le\pi/2. \qquad(**)$$

(Clearly, if we can prove this then the 2nd statement above will follow.)

Now, in drawing pictures, it seems that we should distinguish between the cases $\rho\le 1$ and $\rho\ge 1$. Let’s look at the second case, and consider the triangle formed by the three points $\rho e^{2i\theta}$, $e^{i\theta}$ and $1$. Then $|\rho e^{2i\theta}-1|$ is bounded above by the sum of the other two side lengths of the triangle, but it is clear from the picture that $|\rho e^{2i\theta}-e^{i\theta}| = |\rho e^{i\theta}-1|$ while $|e^{i\theta}-1| \le |\rho e^{i\theta}-1|$. Hence we have proved that

$$|\rho e^{2i\theta}-1| \;\le\; 2\,|\rho e^{i\theta}-1| \quad\text{whenever }\rho\ge 1.$$

(Actually, now that we have written this down, we can simplify the proof of (**) slightly to get rid of the arguments using arc length. Observe that $|\rho e^{2i\theta}-e^{i\theta}| = |e^{i\theta}|\,|\rho e^{i\theta}-1| = |\rho e^{i\theta}-1|$ and then observe that since $\rho\ge 1$, we have $|\rho e^{i\theta}-1|^2 - |e^{i\theta}-1|^2 = (\rho-1)(\rho+1-2\cos\theta) \ge 0$.)

But what if $\rho\le 1$? Well, in this case we can apply (**) with $\rho$ replaced by $\rho^{-1}$ and θ replaced by $-\theta$, to get

$$|\rho^{-1}e^{-2i\theta}-1| \;\le\; 2\,|\rho^{-1}e^{-i\theta}-1|,$$

and then we have

$$|\rho e^{2i\theta}-1| \;=\; \rho\,|\rho^{-1}e^{-2i\theta}-1| \;\le\; 2\rho\,|\rho^{-1}e^{-i\theta}-1| \;=\; 2\,|\rho e^{i\theta}-1|,$$
as required. Hence we have established the inequality (**) with a constant 2 on the right-hand side, and therefore by rescaling back up again, we have proved the following lemma.

**Lemma 4.** For $-\pi/2\le\theta\le\pi/2$, and any $r,t\ge 0$, we have $|re^{i\theta}-t|^2 \le 2\,|r^2e^{i\theta}-t^2|$.

Combining Lemma 3 and Lemma 4, we have proved Claim 2, with a value of $K=2\sqrt{2}$.
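Both halves of the argument are easy to spot-check numerically; for instance, the key inequality (**), in the constant-2 form used above, can be sampled as follows (with a hypothetical helper name):

```python
import cmath
import math
import random

def doubling_bound(rho: float, theta: float) -> bool:
    # |rho e^{2i theta} - 1| <= 2 |rho e^{i theta} - 1| ?   (inequality (**))
    lhs = abs(rho * cmath.exp(2j * theta) - 1)
    rhs = 2 * abs(rho * cmath.exp(1j * theta) - 1)
    return lhs <= rhs + 1e-9

random.seed(0)
ok = all(
    doubling_bound(random.uniform(0, 10),
                   random.uniform(-math.pi / 2, math.pi / 2))
    for _ in range(20_000)
)
print(ok)
```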

Looking back on this, it is clear that the arguments above could be written up in a more concise and more formal way, but this will be left for a shorter follow-up blog post, which might say more about the motivation for the original problem. However, at present I don’t see how to avoid the division into two cases (Lemma 3 and Lemma 4). It should be possible to improve the constant in Lemma 3, since on one side of the inequality we were using worst-case behaviour of the left hand side at θ=π while using worst-case behaviour of the right hand side at θ=π/2.

**Update:** after writing the bulk of this post, I learned from Matt Daws that he can get $K=2$, which is the sharp constant (consider $w=-z$). His argument is more calculus-based and less explicitly geometric. Both Matt and I would be happy to hear of any explicit references for this inequality, either in the sharp form or even just for some explicit value of $K$.

And we’ll bask in the shadow

Of yesterday’s triumph

Sail on the steel breeze

Come on you boy child

You winner and loser

Come on you miner for truth and delusion

and shine

As time went on we saw less and less of Teddy and Vern until eventually they became just two more faces in the halls. That happens sometimes. Friends come in and out of your life like busboys in a restaurant.

you’re on the wire and can’t get back

how could you go and die

what a lonely thing to do

dona eis sempiternam requiem (grant them eternal rest)

I never met Bourgain, and have never studied in any depth many of the areas of analysis where he broke new ground and had lasting impact, but several of his papers – not even his deepest or hardest work, just ones that happened to touch on areas of interest to me – have intrigued me with varying degrees of enlightenment and bafflement. Here is a non-comprehensive selection (representing only my own interests) based on a few bookmarks and some reflection off the top of my head.

A counterexample to a complementation problem

Compositio Math. (1981)

New Banach space properties of the disc algebra and $H^\infty$

Acta Math. (1984)

Translation invariant forms on $L^p(G)$ ($1<p<\infty$)

Annales Institut Fourier (1986)

On the similarity problem for polynomially bounded operators on Hilbert space

Isr. J. Math. (1986)

A problem of Douglas and Rudin on factorization

Pacific J. Math. (1986)

On the dichotomy problem for tensor algebras

Trans. Amer. Math. Soc. (1986)

Bounded orthogonal systems and the Λ(p)-set problem

Acta Math. (1989)

Sidonicity and variants of Kaczmarz’s problem (with M. Lewko)

Annales Institut Fourier (2017)

I expect that other bloggers who are more au fait with Bourgain’s work in harmonic analysis, PDE, and additive combinatorics will say more about his impact in those areas; and those who have met him and had deeper involvement with his work will be able to offer more fitting tributes. A start is the blog post of Terry Tao which I mentioned at the start.

This stone cathedral

Drag it through the woods

To where the sea comes into flower

But do not wake up

- , in a very nice way
- , in a very nice way
- , in a nice way
- , in a faintly dodgy way

Conclusion: .

Just how nice or dodgy is our final isomorphism?

Given two connected Lie groups $G$ and $H$, when are their Fourier algebras $A(G)$ and $A(H)$ isomorphic (as topological algebras)?

Generally speaking, there is no universal algorithm for deciding if two commutative Banach algebras (CBAs) are isomorphic in the sense above. However, there are various standard tools one can try to use.

- Are they both unital / non-unital?
- Are they both Jacobson semisimple?
- Do they have homeomorphic maximal ideal spaces? Shilov boundaries?
- Are they both Arens regular?
- Can they be distinguished by cohomological invariants? In particular: are they both (non-)amenable? weakly amenable?

One additional test that is sometimes overlooked is:

- are the underlying topological vector spaces of the two CBAs isomorphic?

To use slightly more common phrasing: do the two CBAs have “the same” underlying Banach space?

The aim of this belated sequel is to present a few simple and instructive examples where we can easily distinguish two given CBAs, and then to show how the results mentioned in the previous post allow us to distinguish two Fourier algebras when the other simple tests seem inadequate.

As before, I have not tried to make the arguments here self-contained, but hopefully those who are interested can easily look up the relevant terminology and definitions.

**Example 1.** Up to isomorphism, there are exactly two unital, commutative, 2-dimensional $\mathbb{C}$-algebras, corresponding to $\mathbb{C}\oplus\mathbb{C}$ and $\mathbb{C}[x]/(x^2)$.

The first algebra is semisimple but the second is not; so the two algebras cannot be isomorphic.

**Example 2.** Consider the following function algebras on the closed unit disc $\overline{\mathbb{D}}$: $C(\overline{\mathbb{D}})$, the algebra of all continuous complex-valued functions on $\overline{\mathbb{D}}$; and $A(\mathbb{D})$, the subalgebra of all $f\in C(\overline{\mathbb{D}})$ which are analytic on the open unit disc. We equip both of these with the usual supremum norm. Both are unital, semisimple, Arens regular Banach algebras, and both have maximal ideal space $\overline{\mathbb{D}}$. However, the Shilov boundary of $A(\mathbb{D})$ is the unit circle, while that of $C(\overline{\mathbb{D}})$ is the whole of the closed disc. So these Banach algebras cannot be isomorphic.

**Example 3.** Take $A(\mathbb{D})$, as in Example 2, but now consider the subalgebra $A_+(\mathbb{D})$, which consists of all $f\in A(\mathbb{D})$ whose Taylor series (centred at $0$) converge absolutely on the closed unit disc. In other words, such $f$ are of the form $f(z)=\sum_{n=0}^\infty c_nz^n$ where $\sum_{n=0}^\infty |c_n|<\infty$. We equip $A_+(\mathbb{D})$ with the obvious $\ell^1$-type norm. Both of these CBAs are unital and semisimple, and both have the same maximal ideal space and Shilov boundary. However there are several ways to show that they are not isomorphic:

- $A(\mathbb{D})$ is Arens regular, while $A_+(\mathbb{D})$ is not;
- the underlying Banach spaces of $A(\mathbb{D})$ and $A_+(\mathbb{D})$ are not isomorphic (for instance, the latter space has the Schur property while the former one does not);
- the automorphism group of $A(\mathbb{D})$ is the group of conformal automorphisms of the disc (with usual action on the unit disc via Möbius transformations) while the automorphism group of $A_+(\mathbb{D})$ is just the circle group acting by rotations.

**Example 4.** Now consider two suitably-chosen, non-isomorphic connected Lie groups $G_1$ and $G_2$. The Fourier algebras $A(G_1)$ and $A(G_2)$ share the following properties:

- both non-unital (and both have bounded approximate identities);
- both Jacobson-semisimple;
- both have maximal ideal spaces homeomorphic to the underlying groups, with the Shilov boundary being the whole maximal ideal space in both cases;
- both Arens irregular;
- both fail to be weakly amenable.

I do not know if they can be distinguished by their automorphism groups (recall that we are not assuming automorphisms are isometric). However, we do know that $A(G_1)$ and $A(G_2)$ are not isomorphic as Banach spaces (and so in particular they cannot be isomorphic as topological algebras).

Why is this? Well, it is known (I think due to Khalil, but possibly also worked out by Gelfand’s school) that $A(G_1)$ is isomorphic as a Banach space to $\mathcal{T}(H)$, where $\mathcal{T}(H)$ denotes the trace-class operators on a Hilbert space $H$.

We also know that if $H_1$ and $H_2$ are separable infinite-dimensional Hilbert spaces, then $\mathcal{T}(H_1)\cong\mathcal{T}(H_2)$ and $B(H_1)\cong B(H_2)$ at the level of Banach spaces.

Now, by using some abstract operator-algebra/operator-space techniques, one can bootstrap this to show that $A(G_1)$ is Banach-space-isomorphic to $\mathcal{T}(H)$ while $A(G_2)$ is Banach-space-isomorphic to $L_1([0,1],\mathcal{T}(H))$. And, as observed in the previous post, these two Banach spaces are not isomorphic.

Can we prove that and are not isomorphic as Banach algebras?

Note that both these Fourier algebras have underlying Banach space isomorphic to $L_1([0,1],\mathcal{T}(H))$, so that the previous argument does not apply. Moreover, both algebras share the same five properties listed in Example 4.

It is my feeling (backed up by some incomplete private calculations) that we can distinguish these two algebras by looking at the space of alternating 2-cocycles. To use some old terminology introduced by B. E. Johnson: it seems that the second algebra is 2-dimensionally weakly amenable, while the first one isn’t. However, to my knowledge this has not been worked out explicitly in the literature.

Theorem 1 below is something I noticed in 2016, but whose proof I forgot to write down at the time. Having just spent a half-hour trying to (re)construct a proof, it seems worth quickly writing down an argument here so that I can find it more easily. Theorems 2 and 3 are then natural things to point out, to indicate the context for Theorem 1; in both cases I’ve tried to piece together proofs from various bits of the literature.

Let **T** denote the space of trace-class operators on a separable infinite-dimensional Hilbert space H. Let **V** = L_{1}([0,1], **T**) be the space of Bochner-integrable **T**-valued functions on [0,1]; alternatively we could define **V** to be the projective tensor product of **T** with L_{1}.

**Theorem 1.** **T** and **V** are not isomorphic as Banach spaces.

**Theorem 2.** The dual spaces **T**^{*} and **V**^{*} are not isometrically isomorphic as Banach spaces.

**Theorem 3.** The dual spaces **T**^{*} and **V**^{*} are isomorphic as Banach spaces.

It is known that the Banach space **T** has the *Radon-Nikodym Property* (RNP). I will not define the RNP here, but all we need to know is that it passes to closed subspaces, and that L_{1} does not have the RNP. Since **V** contains a (complemented) closed subspace isomorphic to L_{1}, it follows that **V** does not have the RNP.

**Question:** Is there a simpler proof of Theorem 1? Invoking the RNP feels like overkill.

Observe that **T**^{*}=B(H) and **V**^{*}= L_{∞}([0,1], B(H)); we denote this second von Neumann algebra by **N** for the sake of brevity. Suppose that B(H) is isometrically isomorphic (as a Banach space) to **N**. By a theorem of Kadison

R. V. Kadison,

Isometries of operator algebras. Annals of Math. 54 (1951), no. 2, 325–338,

this would imply that there is a Jordan *-isomorphism φ from B(H) onto **N**. Because B(H) is a factor, Corollary 11 of Kadison’s paper implies that φ must either be a *-isomorphism or a *-anti-isomorphism. But this is impossible since **N** has non-trivial centre, while B(H) has trivial centre.

**Question:** Can we obtain a more direct proof by investigating Kadison’s arguments and specializing them to the case of B(H)?

This can be deduced from a more general result of Robertson and Wassermann:

A. G. Robertson, S. Wassermann,

Completely bounded isomorphisms of injective operator systems.Bull. London Math. Soc. 21 (1989), 285–290.

However, it seems better to sketch a simpler argument for this particular case, which admittedly uses some of the same ideas; specifically, some form of Pełczyński’s decomposition method.

Observe that **T**^{*}=B(H) and **V**^{*}= L_{∞}([0,1], B(H)). It is easy to construct an isomorphism of Banach spaces between L_{∞}[0,1] and **C** ⊕ L_{∞}[0,1]; a minor variant of this gives an isomorphism of Banach spaces between **V**^{*} and **T**^{*}⊕ **V**^{*}.

Similarly, there is an obvious isomorphism of Banach spaces between L_{∞}[0,1] and L_{∞}[0,1]⊕L_{∞}[0,1], and a minor variation of this gives an isomorphism of Banach spaces between **V**^{*} and **V**^{*}⊕ **V**^{*}.

The final ingredient in this proof is the observation that **T**^{*} is isomorphic as a Banach space to **F** ⊕ **V**^{*} for some Banach space **F**. To justify this, note that there is a projection from B(L_{2} ⊗ H) onto L_{∞}([0,1], B(H)) = **V**^{*}, and that the former space is isomorphic to B(H) = **T**^{*} since L_{2} ⊗ H is isomorphic to H.

Putting things together:

**T**^{*} ≅ **F** ⊕ **V**^{*} ≅ **F** ⊕ (**V**^{*} ⊕ **V**^{*}) ≅ (**F** ⊕ **V**^{*}) ⊕ **V**^{*} ≅ **T**^{*} ⊕ **V**^{*} ≅ **V**^{*}

as required.

Let them have what was under the water. What lived in Venice was still afloat.

—from “Venice Drowned” by Kim Stanley Robinson—

I’m going to watch the bluebirds fly over my shoulder

I’m going to watch them pass me by, maybe when I’m older

every year is getting shorter, never seem to find the time;

plans that either come to naught, or half a page of scribbled lines