BI: Guessing geodesics in hyperbolic spaces

In this post I’ll discuss a cool lemma due to Brian Bowditch (from this paper) which is very useful when you want to show that a space is hyperbolic.

The most direct way of showing that X is hyperbolic is to figure out what geodesics in X look like, show that your guess is correct, and then show that triangles are thin, right? Well, Bowditch tells you that you can skip the second step…

This might not seem like much, but showing that a given path is (close to) a geodesic can be quite annoying… and here you get it for free!

More formally, the statement is the following (in this form it’s due to Ursula Hamenstädt, see Proposition 3.5 here). We denote the D-neighborhood by N_D.

Guessing Geodesics Lemma: Suppose that we’re given, for each pair of points x,y in the geodesic metric space X, a path \eta(x,y) connecting them so that, for each x,y,z,

\eta(x,y)\subseteq N_D(\eta(x,z)\cup \eta(z,y)),

for some constant D. Then X is K-hyperbolic and each \eta(x,y) is K-Hausdorff-close to a geodesic, where K=K(D).

(As stated, the lemma is not quite correct: you also have to require two “coherence conditions”, namely that diam(\eta(x,y))\leq D whenever d(x,y)\leq 1, and that for all x',y'\in \eta(x,y) the Hausdorff distance between \eta(x',y') and the subpath of \eta(x,y) between x' and y' is at most D.)
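As a toy sanity check of the hypothesis, consider the simplest example: in a tree, say the Cayley graph of the free group F_2, the genuine geodesics satisfy the thin-triangle condition with D = 0. Here is a minimal sketch in Python (the helper names are made up for illustration; elements of F_2 are reduced words, with capital letters as inverses):

```python
import random

# Work in F_2 = <a, b>; its Cayley graph is a tree, so geodesics between
# group elements are unique and the lemma's hypothesis should hold with D = 0.

def reduce_word(w):
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()          # cancel an adjacent pair like aA or Bb
        else:
            out.append(c)
    return ''.join(out)

def inv(w):
    return ''.join(c.swapcase() for c in reversed(w))

def geodesic(x, y):
    # Vertices of the unique geodesic from x to y: follow the reduced
    # word spelling x^{-1} y one letter at a time.
    g = reduce_word(inv(x) + y)
    return [reduce_word(x + g[:i]) for i in range(len(g) + 1)]

def thin(x, y, z):
    # Is eta(x,y) contained in the union of the other two sides (D = 0)?
    side = set(geodesic(x, z)) | set(geodesic(z, y))
    return all(p in side for p in geodesic(x, y))

random.seed(0)
def random_word(n):
    return reduce_word(''.join(random.choice('aAbB') for _ in range(n)))

print(all(thin(random_word(8), random_word(8), random_word(8))
          for _ in range(100)))  # True: in a tree the hypothesis holds on the nose
```

Of course the interesting cases are the ones where the candidate paths are only coarsely well-behaved, but the D = 0 tree case is a useful picture to keep in mind.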

This trick has been used many times in the literature: Bowditch used it to show that curve complexes are hyperbolic, Hamenstädt to study convex-cocompact subgroups of mapping class groups, Druţu and Sapir to show that asymptotic tree-graded is the same as relatively hyperbolic, Ilya Kapovich and Rafi to show that the free-factor complex is hyperbolic, etc.

The proof is quite neat. First, one shows a weaker estimate: if \beta is any rectifiable path connecting, say, x to y, then the distance of any p\in\eta(x,y) from \beta is bounded, roughly, by \log(length(\beta)).

To show this, you just split \beta into two halves of equal length, draw the corresponding \eta‘s and notice that p is D-close to one of them.

Then repeat the procedure logarithmically many times or, more formally, use induction (starting from diam(\eta(x,y))\leq D when d(x,y)\leq 1).
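The halving-plus-induction step can be recorded as a recursion (constants are illustrative; the precise bookkeeping gives the same shape). Writing B(\ell) for the bound on d(p,\beta) over all paths \beta of length at most \ell, the thin-triangle hypothesis and the first coherence condition give

```latex
B(\ell)\le B(\ell/2)+D,\qquad B(1)\le D
\quad\Longrightarrow\quad
B(\ell)\le D\left(\log_2 \ell+1\right),
```

which is the promised logarithmic bound.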

Ok, now that we have the weaker estimate (for any rectifiable path) we have to improve it (for geodesics).
Now let \beta be a geodesic from x to y, let p\in\eta(x,y) be the point furthest from \beta, and set d(p,\beta)=\xi. Pick x',y'\in\eta(x,y) before and after p at distance 2\xi from p (let’s just ignore the case when d(p,x)<2\xi or d(p,y)<2\xi; I hope you trust that it’s not hard to handle).

As p is the worst point, we have d(x',\beta),d(y',\beta)\leq \xi, so that we can draw a red path of length at most 8\xi like so:

And now the magic happens. In view of the weak estimate for the red path \alpha, we get

\xi\leq d(p,\alpha)\leq O(\log(\xi)),

which gives a bound on \xi. (You might have noticed that here we need the second “coherence condition”). That’s it!

Exercise: complete the proof showing that each q\in \beta is close to \eta(x,y).

A relatively hyperbolic version of this trick will be available soon… 🙂

Posted in guessing geodesics | 2 Comments

BI: Teichmüller space, part I

It is a remarkable fact that the group of orientation-preserving isometries of \mathbb{H}^2 can be identified with the group of bi-holomorphisms of the unit disk \mathbb{D} in \mathbb{C}. [Fun fact: Poincaré said that he discovered (basically) this while stepping on a bus.]

As a consequence, for S an orientable closed surface of genus at least 2 fixed from now on, we have a natural bijection

{complex structures on S } \leftrightarrow {hyperbolic metrics on S}

The bijection is given by the fact that a complex structure on S is a way of seeing S as a quotient of \mathbb{D} by a group of biholomorphisms, and similarly for hyperbolic metrics.
There’s a more explicit way of comparing a complex structure with its corresponding hyperbolic metric. Look at the tangent space T_p(S) at some point p of S. The hyperbolic metric (as well as any other Riemannian metric) naturally gives a collection of circles covering T_p(S). The complex structure does as well (consider multiplication by e^{i\theta} for \theta\in [0,2\pi]). As it turns out, a complex structure and its corresponding hyperbolic metric give the same collection of circles in every tangent space.
(In other words, they induce the same conformal structure.)

Teichmüller space Teich(S) is a space that parametrizes the complex structures/hyperbolic metrics on S. It does not parametrize them up to biholomorphism/isometry, but up to biholomorphisms/isometries isotopic to the identity. The formal definition appears below the informal discussion in the next paragraph.

Suppose you have a homeomorphism f of S supported in a disk and a hyperbolic metric g on S. Then one can take the push-forward f_*(g) of g through f and get another hyperbolic metric on S. Teichmüller space does not distinguish between these metrics (if it did it would be huge!).
On the other hand, suppose you have a hyperbolic metric g on S with some non-trivial isometry f. Then g and f_*(g) actually give distinct points of Teich(S).

And, now, if you really want to see it, here is the formal definition.

Teich(S)=\{(X,f)| X \mathrm{\ hyperbolic\ surface}, f:S\to X \mathrm{\ homeomorphism} \} /_\sim,

where “hyperbolic surface” just means a surface endowed with a hyperbolic metric, and the equivalence relation \sim is given by (X_1,f_1)\sim (X_2,f_2) if and only if there exists an isometry \iota: X_1\to X_2 so that the diagram

commutes up to isotopy, i.e. f_2^{-1}\circ\iota\circ f_1:S\to S is isotopic to the identity. (One can safely substitute isotopies with homotopies.)

There is a topology on Teich(S), but I will not discuss it. Here is probably the most important theorem about Teich(S).

Theorem: Teich(S) is homeomorphic to \mathbb{R}^{6g-6}, where g is the genus of S.

More precisely, there is a system of coordinates for Teich(S), the so-called Fenchel-Nielsen coordinates. Danny Calegari has a post on this which contains way more details than the discussion below.

Digression: The very nice structure of Teich(S) given by this theorem explains why it is reasonable to consider hyperbolic metrics up to isometry isotopic to the identity, rather than up to isometry, at least as a first step. Then, if one is interested in hyperbolic metrics up to isometry, one can take the quotient of Teich(S) by the natural action of the mapping class group MCG(S), and hence form the so-called moduli space of S. This action is very nice as well, e.g. it is properly discontinuous, so that moduli space has the structure of an orbifold.

Back to the theorem. The proof is quite interesting, and is based on a pants decomposition of S, that is to say, a description of S as a union of pairs of pants, i.e. copies of the following object:

Any maximal collection of disjoint simple closed curves on S gives a pants decomposition, here is an example:

One can endow a pair of pants with a hyperbolic metric so that the boundary is a geodesic in the following way. In \mathbb{H}^2 there exist right-angled hexagons, and one can freely choose the lengths a,b,c of three alternating sides as in the picture.

Gluing two copies of such a hexagon along the blue sides yields a hyperbolic metric on a pair of pants. What is more, one can glue pairs of pants and still obtain a hyperbolic metric as long as corresponding boundary components have the same length. So, given a pair of pants decomposition of S, one can assign a length to all the curves appearing in it, consider the corresponding metrics on pairs of pants and glue them all together to get a hyperbolic metric on S.

Any maximal collection of disjoint simple closed curves in S contains 3g-3 curves, not 6g-6, which means that there are other parameters to consider.
The hyperbolic metrics we constructed on pairs of pants have the feature that for any given pair of boundary components there is a unique geodesic connecting them and orthogonal to both of them, namely the suitable blue side of one of the hexagons. Hence you see that gluing two pairs of pants in this way (the blue lines represent the geodesics we just described, in adjacent pairs of pants):

or this way:

gives different metrics. So, in order to specify a metric on S we also have to assign a “twist parameter” to each simple closed curve in the decomposition. You might think that these parameters are defined modulo 2\pi, but this is not the case: if we change one of the twist parameters by 2\pi we obtain a new hyperbolic metric which is indeed isometric to the first one, but there’s no isometry isotopic to the identity between them.

To sum up, we choose a maximal collection of disjoint simple closed curves in S, and in order to specify a hyperbolic metric on S we assign to each of them a “length parameter” and a “twist parameter”. Some routine and some not-so-routine checks and the theorem is proven.
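The dimension count behind the theorem is then simple arithmetic. A sketch (the function is just illustrative bookkeeping, not part of any library):

```python
def pants_decomposition_data(g):
    """Parameter counts for a closed orientable surface of genus g >= 2."""
    pants = 2 * g - 2    # pairs of pants: each has Euler characteristic -1,
                         # and the surface has Euler characteristic 2 - 2g
    curves = 3 * g - 3   # curves in a maximal collection: 3 boundary circles
                         # per pair of pants, each shared by two of them
    return {"pants": pants, "curves": curves,
            "dimension": 2 * curves}   # one length + one twist per curve

print(pants_decomposition_data(2))
# {'pants': 2, 'curves': 3, 'dimension': 6}
```

So for genus 2 one gets 3 curves and the expected 6 = 6g-6 parameters.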

In another post I’ll talk about the tangent space of Teich(S) and the metrics one can put on Teich(S) using it.

Posted in Teichmuller space | Leave a comment

Just for fun: trefoil knot complement cake

Based on joint work with Rob Clancy, presented at the Oxford Mathematical Cake Seminar. You might think that this is just a regular cake.

But it’s not! The icing is in the shape of a trefoil knot, so that the cake itself has the shape of a trefoil knot complement (in S^3). Here, take a look at the layers before they were put on top of each other:

Random facts about the cake:

-its fundamental group is the braid group on 3 strands and has presentations \langle x,y | xyx=yxy\rangle=\langle x,y | x^2=y^3\rangle,
-it contains approximately 500 grams of butter, 500 grams of sugar and 7 eggs,
-it admits finite volume metrics with universal covers \mathbb{H}^2\times \mathbb{R} and \widetilde{SL_2(\mathbb{R})},
-the icing was very buttery,
-it has a Seifert fibration with base orbifold a once-punctured sphere with two cone-points,
-it took an afternoon minus a (spectacular!) rugby match to bake it.
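The two presentations above can even be checked concretely: the assignment \sigma_1\mapsto \begin{pmatrix}1&1\\0&1\end{pmatrix}, \sigma_2\mapsto\begin{pmatrix}1&0\\-1&1\end{pmatrix} gives a well-known representation of B_3 in SL(2,\mathbb{Z}), and a few matrix multiplications verify both relations (a quick sketch; x and y below are \sigma_1\sigma_2\sigma_1 and \sigma_1\sigma_2):

```python
def mul(M, N):
    # 2x2 integer matrix multiplication
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s1 = [[1, 1], [0, 1]]    # sigma_1
s2 = [[1, 0], [-1, 1]]   # sigma_2

# Braid relation: s1 s2 s1 = s2 s1 s2
assert mul(mul(s1, s2), s1) == mul(mul(s2, s1), s2)

# Second presentation: x = s1 s2 s1 and y = s1 s2 satisfy x^2 = y^3 (= -I here)
x = mul(mul(s1, s2), s1)
y = mul(s1, s2)
assert mul(x, x) == mul(y, mul(y, y)) == [[-1, 0], [0, -1]]
print("relations verified")
```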

Credits:

Designer: Rob
Manager: Rob
Cook: Rob
Sculptor: Rob
Icing artist: Rob
Dish washing: Alex

Posted in Just for fun | 1 Comment

EO: embedding graph manifold groups in products of trees

This post is based on this paper with David Hume.
As discussed here, it is a nice property for a group to be quasi-isometrically embeddable in a product of finitely many trees. Graph manifolds are certain 3-manifolds; if you don’t know what they are, just bear in mind the following example: take two surfaces with boundary, take the product of each with S^1 and glue the resulting manifolds along their torus boundaries, switching the factors.

These manifolds are interesting, to me at least, because if you want to prove a property of  all 3-manifold groups invoking geometrization, graph manifolds are one of the cases you have to consider. They are somewhat less studied than the other cases so you can have a lot of fun with them.

Anyway, we showed:

Theorem: If M is a graph manifold, then there exists a quasi-isometric embedding of \pi_1(M) into a product of 3 trees.

And yes,  this can be used to study embeddability of 3-manifold groups in products of trees, as you’ll see (hopefully) next week on arxiv, joint with John MacKay 🙂 [EDIT: here it is]

3 is optimal for closed graph manifolds, and at first we suspected that it was the case for non-closed ones as well. But this is probably not the case, see below.

So, let’s construct a couple of trees given a graph-manifold M, for simplicity say the one depicted above. Start off with a bi-coloring of the Bass-Serre tree of M.

What you see above each vertex is a product of the universal cover of a compact surface with boundary and \mathbb{R}. The universal cover of the surface is a fattened tree, i.e. it looks like this:


Ok, so above each vertex we have, essentially, a product of a tree and \mathbb{R}. Now, we want to put together all the trees sitting above the red vertices of the Bass-Serre tree.
Take red vertices at distance 2. If you think about it, you see that the corresponding trees T_1,T_2 contain geodesics that can naturally be regarded as parallel. Let me try to help you visualize this. Consider the “intermediate” vertex. Above it you can see a strip, i.e. an interval times \mathbb{R}, connecting the boundary components corresponding to the red vertices we are considering. The boundary components of this strip naturally correspond to geodesics in T_1,T_2.

So, you can construct a tree T_{red} starting with a disjoint union of all the trees lying above red vertices and identifying the corresponding geodesics. And of course, you can construct T_{blue} similarly.
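The “disjoint union, then identify” step can be sketched with a union-find structure. This is only a toy illustration, not the actual construction: vertices stand for points of the various trees, and the identification pairs stand for the matched parallel geodesics.

```python
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v
    def union(self, u, v):
        self.parent[self.find(u)] = self.find(v)

def glue(edges, identifications):
    """edges: list of (u, v) over all trees (vertices tagged by tree);
    identifications: pairs of vertices to be identified.
    Returns the edge set of the quotient graph."""
    uf = UnionFind()
    for u, v in identifications:
        uf.union(u, v)
    return {tuple(sorted((uf.find(u), uf.find(v)))) for u, v in edges}

# Two 3-edge paths ("geodesics" in two trees), glued vertex by vertex:
t1 = [((1, i), (1, i + 1)) for i in range(3)]
t2 = [((2, i), (2, i + 1)) for i in range(3)]
glued = glue(t1 + t2, [((1, i), (2, i)) for i in range(4)])
print(len(glued))  # 3: the two parallel geodesics have become one
```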

There is a natural coarsely Lipschitz map \pi_1(M)\to T_{red} \times T_{blue}. The reason why this map might fail to be a quasi-isometric embedding is that it might fail to “see” some long geodesics that spend very little time above each vertex.
However, one can keep track of such geodesics in the Bass-Serre tree, and this actually is the third tree involved in the product.

I suspect that, at least for certain non-closed graph-manifolds, there is no need to “stabilize” by taking the product with the Bass-Serre tree. One needs to arrange things in such a way that whenever a geodesic passes above a vertex, this gets recorded in either T_{red} or T_{blue}, or at least a similar weaker property holds.

Posted in graph manifolds in products of trees | Leave a comment

EO: contracting elements

In this post I’ll talk about this paper. There are many groups that have been known for quite a while to “look like” hyperbolic groups, but only in certain “directions”. Examples of such directions are (the axes of) rank 1 isometries in CAT(0) groups, pseudo-Anosovs in mapping class groups (of closed surfaces), elements acting hyperbolically in groups acting acylindrically on a simplicial tree, and iwip (aka fully irreducible) elements of Out(F_n).
I left out the example I mostly had in mind when writing the paper: hyperbolic elements (of infinite order) in relatively hyperbolic groups. Those are the infinite order elements of a relatively hyperbolic group that are not conjugate into any peripheral subgroup.
They generate undistorted subgroups that have little in common with the peripheral subgroups, something like this:

The picture represents a tree-graded space approximating a portion of a relatively hyperbolic group, with the red line representing the orbit of the hyperbolic element while the flats represent left cosets of peripheral subgroups.
Hyperbolic elements, as well as all the other examples mentioned above, are known to be Morse, which means the following: first, their orbits are quasi-geodesics, and second, whenever you have a blue quasi-geodesic joining points on a red orbit, the quasi-geodesic stays close to the orbit:

Indeed, a stronger property holds. As it turns out, a closest point projection \pi on the orbit of a hyperbolic element has the property that

(*) if d(\pi(x),\pi(y)) is large enough then all geodesics from x to y pass close to \pi(x) and \pi(y).

Exercise: this is indeed a stronger property. Hint: show that a path connecting x to y and staying far from the orbit is very long compared to a geodesic connecting \pi(x) to \pi(y).
So, the natural question is: do we have this property, say, for mapping class groups and pseudo-Anosovs, or for fundamental groups of graph manifolds and elements acting hyperbolically on the Bass-Serre tree, given a word metric? I don’t know… I wildly guess that it’s not true in general, but nonetheless a version of (*) holds.

The idea is that there’s nothing special about a given word metric, so there’s nothing special about geodesics in a given word metric. We might as well consider a family of quasi-geodesics (with uniformly bounded constants) so that every pair of points can be joined by a quasi-geodesic in the family. The advantage of doing this is that in certain cases you have a family of special quasi-geodesics with a nice description and properties that are easier to handle. In the mapping class group case one can use hierarchy paths, while in the graph manifold case one can use the paths described here. (Probably in the mapping class group case one can also use splitting sequences in the train track complex, as defined in this paper, but I’m not very familiar with them…)
The map \pi is no longer defined as the closest point projection; one just says that the set A\subseteq X (for example, the orbit of an element) is contracting if there exists a map \pi:X\to A restricting (coarsely) to the identity on A so that (*) holds when the word “geodesics” is replaced by “special quasi-geodesics”.
This turns out to work in most of the cases mentioned at the very beginning of the post, meaning that, for each of those pairs of a group and a family of elements, the elements in the family have quasi-geodesic and contracting orbits (Out(F_n) and groups acting acylindrically need to be treated in a slightly different way).

Here is what I used this for:

Theorem: If a finitely generated subgroup contains a contracting element and is not virtually cyclic, then a (simple) random walk on the subgroup ends up at a non-contracting element with probability decaying exponentially in the length of the random walk.

In other terms, if you write down a long random word in the generators of the subgroup, the probability that this word represents a non-contracting element is small.
Special cases of this theorem were already known; most notably, it was known for mapping class groups.
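Here is a toy version of the statement in the simplest possible setting, purely for illustration: in the free group F_2 (which is hyperbolic, so that every non-trivial element is contracting), “non-contracting” just means “trivial”, and the empirical fraction of simple random walks ending at a non-contracting element is already tiny at modest lengths.

```python
import random

def reduce_word(w):
    # free reduction in F_2 = <a, b>; capital letters are inverses
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def walk(n):
    # simple random walk on F_2: each step multiplies by a uniformly
    # random generator or inverse
    return reduce_word(''.join(random.choice('aAbB') for _ in range(n)))

random.seed(1)
trials = 1000
trivial = sum(walk(30) == '' for _ in range(trials))
print(trivial / trials)  # close to 0: almost every walk ends at a contracting element
```

Of course the content of the theorem is precisely about groups that are not hyperbolic, where “contracting” is a genuine restriction; the free group is just the sanity check.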

In order to show the theorem, one has to use just one contracting element to produce many others. The lemma that allows one to do this states, loosely, that if g is a contracting element and h is “generic” then hg^n is contracting for a suitable n. By “generic” I mean the following. It turns out that every contracting element g is contained in a maximal virtually cyclic subgroup E(g) so that whenever h is not in E(g), the projection of h \langle g\rangle on \langle g \rangle has uniformly bounded diameter. Generic means not in E(g).

So, in the setting of the lemma, we have the following picture:

All the drawn translates of an orbit of g have very little in common, by genericity, so it should not be a surprise that the red line is a quasi-geodesic. The map \pi that witnesses the fact that h g^n is contracting can be defined by looking at the projections of a given x on the translates of \langle g\rangle.

Given this way of constructing contracting elements, it’s not very hard to prove the theorem. In a sufficiently long random word in the generating set of the subgroup you expect to see any given word, and in particular a word representing a contracting element g. You might as well assume that this appears as a final subword, up to cyclic conjugation. A subgroup as in the theorem turns out to contain many free groups, so there’s “a lot of space” and no reason at all for a random walk to stay close to a given virtually cyclic sub-subgroup. A similar argument applied to the random sub-walk before the final subword g gives that we are in the setting of the lemma described above.
One has to be a little bit more careful with the estimates (and possibly repeat this argument) but there really is not much more than this in the formal proof.

Posted in contracting elements | Leave a comment

Why not donate your paper to the public domain?

Breaking news: David Hume and I just managed to make sure that our paper will be in the public domain when published (by Proceedings of the AMS).
We have been inspired by Saul Schleimer, who has been consistently dedicating his papers to the public domain for quite a while.
And it hasn’t been that difficult! All it took was amending the consent to publish form and replying to an email saying that yes, we really want this.
Apparently “this issue is coming up more frequently”, which is very good.

The reason why I (we) wanted to do this is explained in the brand new page Science is too valuable not to be free.

Posted in Why not donating your paper to the public domain? | Leave a comment

BI: nonstandard analysis, a small investment

First of all, you can read about nonstandard analysis in a slightly more famous blog than my own; for example, you can check out this post. I guess I should explain the quote “a small investment”, due to Isaac Goldbring (afaik). Using nonstandard analysis is never strictly necessary. However, it provides such a clear view of a broad range of constructions that it is a valuable tool, or at the very least a valuable aid to intuition.

One of the advantages of nonstandard analysis is that it gives a way of formalizing intuitive concepts involving infinities that are otherwise rather clumsy to define formally. A very nice example I’ve recently learned about is given by ends of groups (and other metric spaces). Those are “connected components at infinity”, and with nonstandard analysis you can literally define them this way, while the standard definitions are more obscure, in my opinion at least. And then my favourite: asymptotic cones… I always think of asymptotic cones from the nonstandard viewpoint, while most often I (painfully) translate my thoughts into the language of ultrafilters & co. when writing a paper.
The adjective “small” refers to the fact that it really doesn’t take much to learn the basics of nonstandard analysis, and so here we go!

Let’s start with \mathbb{R}. The main idea is to extend it to some object ^*\mathbb{R}\supseteq \mathbb{R}, its nonstandard extension, that contains infinitesimal and infinite quantities. Admit it: your intuition on limits is at least partly based on them!
^*\mathbb{R} should also look as similar as possible to \mathbb{R}. More generally, one might look for an object ^*X associated to, say, a metric space X that encodes the behaviour of X at very large and very small scales. As it turns out, you can construct ^*X\supseteq X starting from any set X. Also, you can take nonstandard extensions of functions f:X\to Y\rightsquigarrow \,^*f:\,^*X\to\,^*Y and relations (orders, etc.), so that whenever you have a structure on a set X (group, metric space,…) you get something for ^*X; we’ll see exactly what. I’ll call standard world (a large enough bit of) the usual universe of set theory and nonstandard world the union of all nonstandard extensions of stuff in the standard world. As it turns out, the standard and nonstandard worlds are spectacularly similar.

Slogan: You can’t tell apart the standard world from the nonstandard world just living inside them.

(The slogan translates into model theoretic language, modulo technicalities, as “the natural map from the standard world to the nonstandard world is an elementary embedding”, so in particular “X and ^*X satisfy the same first order properties”.) In order to explain the slogan, we have to introduce the concept of internal set. Bear with me, this is the non-intuitive part of the business (“small investment”, not “gift”). Internal sets are the subsets of ^*X that you can see when living inside it. Formally, those are the sets in ^*\mathcal{P}(X), but that’s not very enlightening. More interestingly:
1) nonstandard extensions of subsets of X are internal subsets of ^*X,
2) finite sets are internal,
3) you can show that a set is internal if you show that it satisfies a property defined in terms of other internal sets.
There are also internal functions and relations, which satisfy similar properties.

Examples time. Intervals in ^*\mathbb{R}, say closed intervals, are internal. (The nonstandard extension of the total order of \mathbb{R} is a total order on ^*\mathbb{R} as we will see in a minute, so “closed interval” makes sense. Also, it is the nonstandard extension of something, so it is internal.) Well, a closed interval is defined in terms of the order structure of ^*\mathbb{R} by the property “the set of all elements lying between the endpoints”. Other examples of internal sets are given by level sets of internal functions… Those are defined in terms of other internal objects, aren’t they?

The reason why internal sets (and functions and relations) are interesting is the following theorem by Łoś:

Transfer principle: Internal sets, functions and relations satisfy the same formulae that are satisfied in the standard world.

Here are some examples to clarify this. Let’s start with the simplest type of internal guys, the nonstandard extensions of guys in the standard world. A group structure on a set is an operation satisfying certain properties. Now, if you take the nonstandard extension of the group operation, by the transfer principle it will also satisfy those properties. So, the nonstandard extension of a group is a group. Similarly, the nonstandard extension of a (total) order is a (total) order.
Now, a slightly more sophisticated example. Every non-empty subset of \mathbb{N} has a minimum. Hence, every internal non-empty subset of ^*\mathbb{N} has a minimum. There are non-internal non-empty subsets of ^*\mathbb{N} with no minimum, but you cannot see them if you live in the nonstandard world, as they are not internal. And this explains the slogan.

So far we have explored the similarities between the standard and the nonstandard world, but of course we would like the nonstandard world to have extra properties. For example, we would like ^*\mathbb{R} to contain infinitesimals.
The feature of the nonstandard world that makes this true is called saturation:

Saturation: the intersection of countably many internal sets \{A_i\} is non-empty as long as each finite intersection A_0\cap \dots \cap A_n is non-empty.

A good way of thinking about this is: if I have countably many conditions so that every finite collection of them can be satisfied, then all of them can be satisfied.
The existence of infinitesimals follows by applying saturation to the internal sets (0,1/i).

I would like to conclude with an example where these concepts are applied, except that the authors didn’t use the language of nonstandard analysis directly. When G is a group, we say that G satisfies the law w, where w is a non-trivial word in some alphabet a_1,\dots, a_n, if whenever the letters in w are substituted by elements of G one obtains the identity. For example, abelian groups satisfy the law a_1a_2a_1^{-1}a_2^{-1}, and similarly any solvable group satisfies a law as well.
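For concreteness, here is a brute-force check of the abelian example, in a sketch with small permutation groups (the function names are made up, and only the single commutator law is tested): abelian groups satisfy a_1a_2a_1^{-1}a_2^{-1}, while S_3 does not.

```python
from itertools import permutations, product

def compose(p, q):
    # permutation composition: (p q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def satisfies_commutator_law(G):
    """Does a1 a2 a1^{-1} a2^{-1} = e for every substitution from G?"""
    n = len(next(iter(G)))
    e = tuple(range(n))
    return all(
        compose(compose(a, b), compose(inverse(a), inverse(b))) == e
        for a, b in product(G, repeat=2))

S3 = set(permutations(range(3)))          # symmetric group, non-abelian
Z3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # cyclic rotations, abelian

print(satisfies_commutator_law(Z3), satisfies_commutator_law(S3))
# True False
```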

[Lemma 6.15, Druţu-Sapir] A group G satisfies a law if and only if ^*G does not contain a free group on two generators.

The “only if” part follows by applying transfer: if G satisfies a law then ^*G satisfies the same law. Once again: one cannot distinguish G from ^*G using internal concepts, and satisfying a given law is indeed an internal concept, because in the appropriate sense any given word is built up in finitely many steps (or, if you prefer, contains a finite amount of information).
The converse relies on saturation. There are countably many words, and not satisfying a law means that you can find, for each finite collection of words in two variables w_1,\dots, w_n, two elements of your group that do not satisfy any of them. There’s a trick here: you should choose two elements not satisfying the law [w_1,[w_2,[\dots[w_{n-1}, w_n]\dots]]] (which you can assume to be non-trivial). So, you may not find two elements in G that do not satisfy any law, but by saturation you know that you can find them in ^*G. And those freely generate a free group.

Final remarks for those of you who know about ultrafilters. The proofs of the lemma above using ultrafilters and using nonstandard analysis are the same, but somehow the key concept used is saturation, which fits more naturally in the nonstandard setting… Also, if you use the ultrafilters language you have to re-prove saturation every time you use it…

Posted in nonstandard analysis | Leave a comment