### Introduction to hyperbolic geometry and hyperbolic embeddings

#### by Emin Orhan

This week, I gave a tutorial on hyperbolic geometry and discussed this paper that proposes embedding symbolic data in a hyperbolic space, rather than in Euclidean space. In this post, I will try to summarize my presentation. I closely followed the book *Hyperbolic Geometry* by James W. Anderson, an extremely well-written introductory textbook on the subject. I highly recommend this book to anyone who is interested in learning about hyperbolic geometry.

**Euclid**

Euclid axiomatized plane geometry more than two millennia ago. He came up with five axioms from which all of plane geometry could be derived:

- Given two points in the plane, a unique line can be drawn passing through those points.
- Any line segment can be extended indefinitely in either direction.
- Given any line segment, a circle can be drawn with its center at one of the endpoints of the segment and its radius equal to the length of the segment.
- Any two right angles are congruent.
- Parallel postulate: “If a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.”

There are equivalent formulations of the parallel postulate, perhaps the most famous of which is Playfair’s axiom: “Given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.”

Generations of mathematicians after Euclid thought the parallel postulate suspiciously complicated and hoped that it could somehow be derived from the first four axioms. However, all efforts to do so failed. We now know that the parallel postulate is independent of the remaining axioms of plane geometry, meaning that the axiomatic system consisting of the first four axioms and the *negation* of the parallel postulate is a consistent system. In fact, even the ancient Greeks were familiar with a perfectly fine model of a plane geometry that violates the parallel postulate: the surface of the sphere, i.e. $S^2$. If points are interpreted as points and lines as great circles on the surface of the sphere, the parallel postulate is violated, because any two great circles on the surface of the sphere have to intersect, hence they are not parallel. However, the problem with this model is that it also violates the second axiom: line segments cannot be extended indefinitely, since great circles are closed curves. So, it was not taken seriously as a plausible model of plane geometry.

At the beginning of the 19th century, people started to come up with model geometries that violate the parallel postulate, while satisfying all the remaining axioms of Euclid. Gauss, Bolyai and Lobachevsky were the first mathematicians to come up with such a model. However, we will be dealing mostly with two models introduced later by Poincaré. The first one is called the upper half-plane model.

**The upper half-plane model, $\mathbb{H}$**

The upper half-plane is the set of complex numbers with positive imaginary part: $\mathbb{H} = \{ z \in \mathbb{C} \mid \mathrm{Im}(z) > 0 \}$. The points in $\mathbb{H}$ are defined to be the usual Euclidean points, and lines are defined to be either half-lines perpendicular to the real axis, or half-circles with centers on the real axis.

Circles in $\mathbb{H}$ are Euclidean circles (but not necessarily with the same center or radius). As an exercise, you can check that when points, lines and circles in $\mathbb{H}$ are interpreted in this way, all axioms of Euclid are satisfied except for the parallel postulate: show that in $\mathbb{H}$ there are infinitely many lines parallel to a given line passing through a given point not on the line.

This model, by itself, does not come equipped with a metric. We have to define one for it. It turns out that there is a fairly natural way of assigning a metric to $\mathbb{H}$. The idea will be to first find a group of transformations of $\mathbb{H}$ that leave hyperbolic lines invariant, i.e. that map hyperbolic lines to hyperbolic lines in $\mathbb{H}$. We will then require our metric to be invariant under these transformations. It turns out that this requirement uniquely determines the metric (up to a trivial scale factor).

**The group $\text{Möb}$**

The upper half-plane model is thought of as embedded in the extended complex plane, $\bar{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$, consisting of the complex plane and a single point at infinity.

In $\bar{\mathbb{C}}$, we define a circle to be either a Euclidean circle in $\mathbb{C}$ or a Euclidean line in $\mathbb{C}$ plus the point at infinity, $\{\infty\}$. We denote the group of homeomorphisms of $\bar{\mathbb{C}}$ taking circles in $\bar{\mathbb{C}}$ to circles in $\bar{\mathbb{C}}$ by $\mathrm{Homeo}^C(\bar{\mathbb{C}})$. Our first job is to exactly characterize this group. For this purpose, we first define the Möbius transformations.

**Definition:** A Möbius transformation is a function of the following form:

$$m(z) = \frac{az + b}{cz + d}$$

where $a, b, c, d \in \mathbb{C}$ and $ad - bc \neq 0$.

These transformations correspond to compositions of simple translations, rotations, dilations and inversions in the complex plane. There’s a nice video visualizing how these transformations deform a square in the complex plane.

The set of all Möbius transformations forms a group, denoted by $\text{Möb}^{+}$.

**Definition:** The group generated by $\text{Möb}^{+}$ and complex conjugation, $C(z) = \bar{z}$, is called the general Möbius group and is denoted by $\text{Möb}$.

A fundamental result then tells us that the general Möbius group is precisely the group of homeomorphisms of $\bar{\mathbb{C}}$ taking circles to circles in $\bar{\mathbb{C}}$, i.e. $\mathrm{Homeo}^C(\bar{\mathbb{C}})$:

**Theorem:** $\text{Möb} = \mathrm{Homeo}^C(\bar{\mathbb{C}})$.

Another important property of $\text{Möb}$ is that its elements are conformal homeomorphisms, i.e. they preserve angles.

We will concentrate on a particular subgroup of $\text{Möb}$ that preserves the upper half-plane:

**Definition:** $\text{Möb}(\mathbb{H}) = \{ m \in \text{Möb} \mid m(\mathbb{H}) = \mathbb{H} \}$.

It can be shown that every element of $\text{Möb}(\mathbb{H})$ takes hyperbolic lines to hyperbolic lines in $\mathbb{H}$. In this precise sense, we can think of $\text{Möb}(\mathbb{H})$ as some sort of generator for the upper half-plane model.

We can express every element of $\text{Möb}(\mathbb{H})$ explicitly either as:

$$m(z) = \frac{az + b}{cz + d}$$

where $a, b, c, d \in \mathbb{R}$ and $ad - bc = 1$,

or as:

$$m(z) = \frac{a\bar{z} + b}{c\bar{z} + d}$$

where $a, b, c, d$ are purely imaginary and, again, $ad - bc = 1$.
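As a quick numerical sanity check (a sketch of mine, not from the book), we can verify that a transformation of the first kind preserves the upper half-plane; a short computation shows $\mathrm{Im}(m(z)) = \mathrm{Im}(z)/|cz + d|^2$, which is positive whenever $\mathrm{Im}(z) > 0$:

```python
def mobius(z, a, b, c, d):
    """Apply the Möbius transformation z -> (a*z + b) / (c*z + d)."""
    return (a * z + b) / (c * z + d)

# Real coefficients with a*d - b*c = 1 (the first form above).
a, b, c, d = 2.0, 1.0, 1.0, 1.0   # ad - bc = 2*1 - 1*1 = 1
z = 0.5 + 2.0j                    # a point with positive imaginary part

w = mobius(z, a, b, c, d)
print(w.imag > 0)                                         # the image stays in H
print(abs(w.imag - z.imag / abs(c * z + d)**2) < 1e-12)   # Im(m(z)) = Im(z)/|cz+d|^2
```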

**Length and distance in $\mathbb{H}$**

We are now ready to define a line element and a distance metric in $\mathbb{H}$. As mentioned above, we will do this by requiring that whatever line element and distance metric we choose for $\mathbb{H}$ should be invariant under the action of the group $\text{Möb}(\mathbb{H})$. It turns out that this requirement uniquely determines the line element and the distance metric (up to a trivial scalar factor).

Let’s first review a few basic definitions. Let $f(t) = (x(t), y(t))$, $t \in [a, b]$, define a path in the Euclidean plane. The Euclidean length of $f$ is defined as:

$$\mathrm{length}(f) = \int_a^b \sqrt{x'(t)^2 + y'(t)^2}\, dt$$

or, if the Euclidean plane is considered as the complex plane, this can be equivalently expressed as:

$$\mathrm{length}(f) = \int_f |dz| = \int_a^b |f'(t)|\, dt$$

where $|dz|$ is called the Euclidean line element. A more general line element can be defined by scaling the Euclidean line element by a scale factor, $\rho(z)$, that could depend on $z$: i.e. $\rho(z)|dz|$. We define the length of $f$ with respect to this new line element as:

$$\mathrm{length}_\rho(f) = \int_f \rho(z)\, |dz| = \int_a^b \rho(f(t))\, |f'(t)|\, dt$$

As mentioned above, our strategy will be to determine a $\rho(z)$ for $\mathbb{H}$ by requiring the lengths of all paths in $\mathbb{H}$ to be preserved under the action of the group $\text{Möb}(\mathbb{H})$, i.e. $\mathrm{length}_\rho(f) = \mathrm{length}_\rho(\gamma \circ f)$ for all paths $f$ and all $\gamma \in \text{Möb}(\mathbb{H})$. This requirement results in (up to a constant multiplicative factor):

$$\rho(z) = \frac{1}{\mathrm{Im}(z)}$$

Therefore, the hyperbolic length of a path $f$ in $\mathbb{H}$ can be measured as:

$$\mathrm{length}_{\mathbb{H}}(f) = \int_f \frac{|dz|}{\mathrm{Im}(z)} = \int_a^b \frac{|f'(t)|}{\mathrm{Im}(f(t))}\, dt$$

**Example:** Consider $f(t) = it$ defined in the interval $[a, 1]$, with $0 < a < 1$. Then, $\mathrm{length}_{\mathbb{H}}(f) = \int_a^1 \frac{dt}{t} = \log(1/a)$. We see that the length blows up as $a \to 0$.
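Taking the path to be $f(t) = it$ on $[a, 1]$, its hyperbolic length $\int_a^1 dt/t = \log(1/a)$ is easy to check numerically (a throwaway sketch; the midpoint rule is just one convenient choice):

```python
import math

def hyp_length_imag_axis(a, b, n=100_000):
    """Midpoint-rule approximation of the hyperbolic length of f(t) = i*t
    on [a, b]: the integrand is |f'(t)| / Im(f(t)) = 1/t."""
    h = (b - a) / n
    return sum(h / (a + (k + 0.5) * h) for k in range(n))

a = 0.01
print(hyp_length_imag_axis(a, 1.0), math.log(1 / a))  # the two values agree closely
```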

Now, we can define a distance metric in $\mathbb{H}$ based on our definition of length. The idea is to find a $\gamma \in \text{Möb}(\mathbb{H})$ that maps the hyperbolic line passing through $z_1$ and $z_2$ to the imaginary axis so that $\gamma(z_1) = \mu i$ and $\gamma(z_2) = \lambda i$ for some positive $\mu$ and $\lambda$ (it is easy to show that we can always find such a transformation and that, if there are multiple such transformations, the metric defined below does not depend on which one is chosen). From the example above, we know how to measure the lengths of line segments on the imaginary axis.

When we do this, we find the following formula for measuring the hyperbolic distance between two arbitrary points $z_1, z_2 \in \mathbb{H}$:

$$d_{\mathbb{H}}(z_1, z_2) = \mathrm{arccosh}\left(1 + \frac{(x_2 - x_1)^2 + (y_2 - y_1)^2}{2 y_1 y_2}\right)$$

where $(x_1, y_1)$ and $(x_2, y_2)$ are the coordinates of $z_1$ and $z_2$, respectively (an equivalent expression can be given in terms of the center and the radius of the hyperbolic line passing through $z_1$ and $z_2$).
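The standard closed form $d_{\mathbb{H}}(z_1, z_2) = \mathrm{arccosh}\!\left(1 + \frac{|z_1 - z_2|^2}{2\,\mathrm{Im}(z_1)\,\mathrm{Im}(z_2)}\right)$ is a one-liner in code (my own sketch); as a check, for two points on the imaginary axis it reduces to the logarithm of the ratio of their imaginary parts, as in the example above:

```python
import math

def dist_H(z1, z2):
    """Hyperbolic distance in the upper half-plane between two complex
    numbers with positive imaginary part."""
    return math.acosh(1 + abs(z1 - z2)**2 / (2 * z1.imag * z2.imag))

print(dist_H(1j, 4j))                     # equals log(4/1) ≈ 1.3863
print(dist_H(1j, 4j) == dist_H(4j, 1j))   # the formula is symmetric
```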

One can show that the group $\text{Möb}(\mathbb{H})$ corresponds exactly to the group of isometries of $\mathbb{H}$: i.e. $\text{Möb}(\mathbb{H}) = \mathrm{Isom}(\mathbb{H}, d_{\mathbb{H}})$.

One can also show that there are two qualitatively distinct types of parallel lines in $\mathbb{H}$: parallel lines that asymptote to a common point at infinity and parallel lines that do not asymptote to a common point at infinity (called ultraparallel lines). For the former, it can be shown that the distance between the two lines, $\inf\{d_{\mathbb{H}}(z_1, z_2) \mid z_1 \in \ell_1, z_2 \in \ell_2\}$, is $0$, whereas for ultraparallel lines this infimum is strictly positive. Therefore, ultraparallel lines behave more like parallel lines in Euclidean space.

**The Poincaré disk model, $\mathbb{D}$**

Another model of hyperbolic geometry is the Poincaré disk model. This model is defined inside the unit disk in $\mathbb{C}$: $\mathbb{D} = \{ z \in \mathbb{C} \mid |z| < 1 \}$. A hyperbolic line in $\mathbb{D}$ is the image under $m$ of a hyperbolic line in $\mathbb{H}$, where $m$ is an element of $\text{Möb}$ taking $\mathbb{H}$ onto $\mathbb{D}$. This implies that hyperbolic lines in $\mathbb{D}$ are Euclidean arcs perpendicular to the unit circle.

Now, we can do the same things with this model that we did with the upper half-plane model, namely find a scaling function $\rho(z)$ that makes the lengths of paths invariant under the group $\text{Möb}(\mathbb{D})$ of transformations preserving $\mathbb{D}$, then define lengths and distances based on this $\rho(z)$. When we do that, we get the following expressions:

$$\rho_{\mathbb{D}}(z) = \frac{2}{1 - |z|^2}, \qquad \mathrm{length}_{\mathbb{D}}(f) = \int_f \frac{2\,|dz|}{1 - |z|^2}$$

**Example:** Consider $f(t) = t$, $t \in [0, r]$. Then, $\mathrm{length}_{\mathbb{D}}(f) = \int_0^r \frac{2\,dt}{1 - t^2} = \log\left(\frac{1 + r}{1 - r}\right)$. Again, we see that the length blows up as $r \to 1$.

**Example:** It can be shown that the length of a hyperbolic circle in $\mathbb{D}$ with hyperbolic center $0$ and hyperbolic radius $s$ is $2\pi \sinh(s)$.

This line element leads to the following distance metric in $\mathbb{D}$, $d_{\mathbb{D}}$:

$$d_{\mathbb{D}}(z_1, z_2) = \mathrm{arccosh}\left(1 + \frac{2\,|z_1 - z_2|^2}{(1 - |z_1|^2)(1 - |z_2|^2)}\right)$$

This same formula holds in higher dimensional versions of the Poincaré disk model as well.
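A direct implementation of the disk metric $d_{\mathbb{D}}(z_1, z_2) = \mathrm{arccosh}\!\left(1 + \frac{2|z_1 - z_2|^2}{(1 - |z_1|^2)(1 - |z_2|^2)}\right)$ (my own sketch): for a point at Euclidean radius $r$ from the origin, the distance to the origin should match $\log\frac{1+r}{1-r}$ from the length example above:

```python
import math

def dist_D(z1, z2):
    """Hyperbolic distance in the Poincaré disk (requires |z1|, |z2| < 1)."""
    num = 2 * abs(z1 - z2)**2
    den = (1 - abs(z1)**2) * (1 - abs(z2)**2)
    return math.acosh(1 + num / den)

r = 0.9
print(dist_D(0j, r + 0j), math.log((1 + r) / (1 - r)))  # both ≈ 2.944
```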

**Hyperbolic Area**

Similarly to the length of a path in $\mathbb{H}$, we define the area of a set $X \subset \mathbb{H}$ by:

$$\mathrm{area}_{\mathbb{H}}(X) = \int_X \frac{dx\, dy}{y^2}$$

Just like length and distance, area is also invariant under the action of the group $\text{Möb}(\mathbb{H})$.

In the Poincaré disk model, area is defined as (using Cartesian coordinates $z = x + iy$):

$$\mathrm{area}_{\mathbb{D}}(X) = \int_X \frac{4\, dx\, dy}{(1 - x^2 - y^2)^2}$$

It can be shown that the area of a hyperbolic disk in $\mathbb{D}$ with center $0$ and hyperbolic radius $s$ is $4\pi \sinh^2(s/2)$. This implies that for a disk with center $0$ and radius $s$, the ratio $\mathrm{length}/\mathrm{area} \to 1$ as $s \to \infty$. This is very unlike the Euclidean case, where $\mathrm{length}/\mathrm{area} = 2/s \to 0$ as $s \to \infty$. Intuitively, lengths become as big as areas for sufficiently large shapes in hyperbolic space (similar results hold for higher dimensional hyperbolic spaces).
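The circumference-to-area claim is easy to verify numerically from the two closed-form expressions, circumference $2\pi\sinh(s)$ and area $4\pi\sinh^2(s/2)$ (a sketch; their exact ratio is $\coth(s/2)$, which tends to $1$):

```python
import math

# Circumference and area of a hyperbolic circle of hyperbolic radius s;
# the ratio coth(s/2) approaches 1 as s grows.
for s in [1.0, 5.0, 10.0, 20.0]:
    circumference = 2 * math.pi * math.sinh(s)
    area = 4 * math.pi * math.sinh(s / 2)**2
    print(s, circumference / area)
```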

**Gauss-Bonnet**

This is a truly amazing result that says that for a hyperbolic triangle with interior angles $\alpha$, $\beta$, $\gamma$, the area is given by $\pi - (\alpha + \beta + \gamma)$.

In fact, this result generalizes to hyperbolic polygons: if $P$ is a hyperbolic polygon with interior angles $\alpha_1, \ldots, \alpha_n$, then the area of $P$ is given by:

$$\mathrm{area}(P) = (n - 2)\pi - \sum_{k=1}^{n} \alpha_k$$

Another big difference from the Euclidean case is that in the Euclidean plane, there is only one regular $n$-gon (up to translation, rotation and dilation) and its interior angle is $\frac{(n-2)\pi}{n}$. The situation is completely different in the hyperbolic plane: for all $n \geq 3$ and each $\alpha \in \left(0, \frac{(n-2)\pi}{n}\right)$, there is a compact regular hyperbolic $n$-gon whose interior angle is $\alpha$.
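The Gauss-Bonnet formula $(n-2)\pi - \sum_k \alpha_k$ immediately gives the area of any such regular polygon (a small sketch of mine; the right-angled regular pentagon below has no Euclidean counterpart):

```python
import math

def regular_ngon_area(n, alpha):
    """Area of a regular hyperbolic n-gon with interior angle alpha,
    via Gauss-Bonnet: (n - 2)*pi - n*alpha. Requires 0 < alpha < (n-2)*pi/n."""
    assert 0 < alpha < (n - 2) * math.pi / n
    return (n - 2) * math.pi - n * alpha

# A regular right-angled pentagon: area 3*pi - 5*(pi/2) = pi/2.
print(regular_ngon_area(5, math.pi / 2))
```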

**Higher-dimensional hyperbolic spaces**

The two-dimensional models already make apparent many strange features of hyperbolic spaces. Things can get even stranger in higher dimensional hyperbolic spaces. However, instead of studying these spaces formally, we can try to get a feel for their strange properties by exploring the three-dimensional hyperbolic space, $\mathbb{H}^3$. Here’s a software tool for exploring $\mathbb{H}^3$. You can find some background knowledge and information about how to use this tool here. The tool here models the {4,3,6}-tessellation of $\mathbb{H}^3$ (the notation here means the shape of the cells tiling the space is a cube, whose Schläfli symbol is {4,3}, and there are six of these cells around each edge). This is not the only possible tessellation of $\mathbb{H}^3$; in fact, it is not even the only possible cubic tessellation of $\mathbb{H}^3$ (this is again a drastic difference from the Euclidean 3-space, where there is essentially only one cubic tessellation of the space). There is, in fact, an infinite number of such tessellations (a fact closely related to the result mentioned above that says that there are regular hyperbolic $n$-gons with any interior angle strictly between $0$ and $\frac{(n-2)\pi}{n}$). I strongly encourage you to explore these different tessellations of $\mathbb{H}^3$ on Wikipedia, which has amazing visualizations of them.

**Hyperbolic embeddings**

Finally, I would like to briefly discuss this paper by Nickel and Kiela that proposes embedding data (especially symbolic or graph-structured data) in hyperbolic spaces (see this previous post of mine for a brief review of why one might want to use embeddings in continuous spaces to represent symbolic data). The idea is really simple: we first define an objective that encourages semantically or functionally related items to have closer representations (and conversely semantically or functionally unrelated items to have more distant representations) in the embedding space, where the embedding space is a hyperbolic space (for example, the Poincaré ball model, i.e. the higher dimensional version of the Poincaré disk model reviewed above) and distance is measured by hyperbolic distance. For instance, one of the objectives they use in the paper is the following loss function:

$$\mathcal{L}(\Theta) = -\sum_{(u, v) \in \mathcal{D}} \log \frac{e^{-d(u, v)}}{\sum_{v' \in \mathcal{N}(u)} e^{-d(u, v')}}$$

where $\mathcal{D}$ denotes the set of pairs of semantically or functionally related items that are connected by an edge in the dataset, $\mathcal{N}(u)$ denotes a set of items unrelated to $u$ (together with $v$ itself), and $d(u, v)$ denotes the hyperbolic distance between the embeddings of the items in the Poincaré ball model.
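A minimal sketch of this loss in plain Python (the dictionary-based `emb`, `edges`, `negatives` layout is my own choice for illustration, not the paper's implementation; I assume each `negatives[u]` includes the positive item $v$):

```python
import math

def poincare_dist(u, v):
    """Poincaré-ball distance between two embedding vectors (lists of floats)."""
    uu = sum(x * x for x in u)
    vv = sum(x * x for x in v)
    duv = sum((x - y)**2 for x, y in zip(u, v))
    return math.acosh(1 + 2 * duv / ((1 - uu) * (1 - vv)))

def loss(edges, negatives, emb):
    """Negative log-softmax loss: each related pair (u, v) should be closer
    than u's negative samples."""
    total = 0.0
    for u, v in edges:
        scores = {w: math.exp(-poincare_dist(emb[u], emb[w])) for w in negatives[u]}
        total -= math.log(scores[v] / sum(scores.values()))
    return total

emb = {"a": [0.1, 0.0], "b": [0.0, 0.2], "c": [0.5, 0.5]}
print(loss([("a", "b")], {"a": ["b", "c"]}, emb))  # a positive number
```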

We then just do gradient descent on this loss function to optimize the positions of the items in the embedding space. There are two subtleties to take care of while doing gradient descent: first, because we are working with the Poincaré ball model, we have to make sure that the positions stay within the unit ball during optimization. This can be achieved by applying a projection operator after each update:

$$\mathrm{proj}(\theta) = \begin{cases} \theta / \|\theta\| - \varepsilon & \text{if } \|\theta\| \geq 1 \\ \theta & \text{otherwise} \end{cases}$$

where $\varepsilon > 0$ is a small constant that ensures numerical stability.

Secondly, because we are working in a curved space, taking its intrinsic curvature into account during optimization would be more efficient. This means that we have to use the Riemannian gradient rather than the Euclidean gradient for gradient descent. These two gradients are related to each other by the inverse of the metric, which is itself given by the square of the line element scaling factor described above. This then leads to the following update rule:

$$\theta_{t+1} = \mathrm{proj}\left(\theta_t - \eta_t \frac{(1 - \|\theta_t\|^2)^2}{4} \nabla_E \mathcal{L}\right)$$

Here, $\eta_t$ is the learning rate, $\nabla_E \mathcal{L}$ is the Euclidean gradient, and the term $\frac{(1 - \|\theta_t\|^2)^2}{4}$ is just the inverse of the metric, which in turn is the square of the scaling factor $\rho_{\mathbb{D}}(\theta) = \frac{2}{1 - \|\theta\|^2}$ derived above for the Poincaré disk model.
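Putting the projection and the rescaled gradient together, one optimization step might look like this (a sketch assuming plain SGD; I renormalize escaped points to norm $1 - \varepsilon$, a common reading of the projection, and the paper's burn-in and negative-sampling details are omitted):

```python
import math

EPS = 1e-5  # keeps points strictly inside the unit ball

def project(theta):
    """Renormalize theta to norm 1 - EPS if it has escaped the unit ball."""
    norm = math.sqrt(sum(x * x for x in theta))
    if norm >= 1.0:
        return [x / norm * (1.0 - EPS) for x in theta]
    return theta

def rsgd_step(theta, euclidean_grad, lr):
    """One Riemannian SGD step on the Poincaré ball: rescale the Euclidean
    gradient by the inverse metric (1 - |theta|^2)^2 / 4, then project."""
    sq_norm = sum(x * x for x in theta)
    scale = (1.0 - sq_norm)**2 / 4.0
    updated = [t - lr * scale * g for t, g in zip(theta, euclidean_grad)]
    return project(updated)

theta = rsgd_step([0.3, 0.4], [1.0, -2.0], lr=0.1)
print(theta)  # the updated point stays inside the unit ball
```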

The authors show that hyperbolic embeddings lead to better performance in reconstruction and link prediction tasks, often with orders of magnitude fewer embedding dimensions than a Euclidean embedding! That’s an impressive improvement. Intuitively, hyperbolic spaces of a given dimension contain a lot “more space” than the corresponding Euclidean space, so we need far fewer dimensions to accurately capture the semantic or functional relationships between the items. That said, I have a few concerns about hyperbolic embeddings:

- Although it may be true that hyperbolic spaces are better suited than Euclidean spaces to capture the semantic or functional relationships in tree-structured data, there is, in general, no reason to assume *a priori* that the underlying space has to be *exactly* hyperbolic. I think a better approach would be to learn the appropriate metric, rather than assuming it. One can still initialize it close to a hyperbolic metric, but making it flexible would allow us to capture possible deviations from hyperbolicity.
- In the paper, they use relatively simple tasks to demonstrate the advantages of hyperbolic embeddings. In practice, one may want to use these embeddings as inputs to some deep and large model being trained on a more challenging task. One may also want to learn such hyperbolic embeddings jointly with the parameters of the deep model in an end-to-end fashion. I’m not quite sure how well hyperbolic embeddings would behave in such a context. The fact that distances blow up very rapidly in hyperbolic spaces as one approaches the boundary suggests that such end-to-end training of a deep and large model on hyperbolic embeddings may be riddled with serious optimization difficulties. Again, making the metric flexible could force the model to find and use a more “learnable” metric.