Learning representations of symbolic and multi-relational data has become a central paradigm in artificial intelligence and machine learning. The goal of embedding is to organize symbolic objects so that their proximity in the embedding space reflects their semantic or functional similarity. Word embeddings such as WORD2VEC, GLOVE, and FASTTEXT have proven valuable in tasks ranging from machine translation to sentiment analysis, while graph embeddings such as latent space embeddings, NODE2VEC, and DEEPWALK are widely used for link prediction and community detection. Although these methods have been remarkably successful, they share a key drawback: their ability to model complex patterns is inherently bounded by the dimensionality of the embedding space.
To improve learning and generalization, it is important to increase the representational capacity of embedding methods so that complex patterns can be modeled at scale and information can be expressed fully. Representation learning refers to the set of machine learning techniques that allow a system to automatically discover, from raw data, the representations needed for tasks such as classification or feature detection. It has become an extremely useful way of extracting information from symbolic data such as text and graphs. Here, we focus on a particular class of symbolic data: large datasets whose objects can be organized according to a latent hierarchy – a property inherent in many complex datasets.
To better capture this structural property during learning, we move the embedding from Euclidean to hyperbolic space. Hyperbolic space can be thought of as a continuous analogue of trees: it has constant negative curvature, and it is known that any finite tree can be embedded into hyperbolic space. We base our approach on the Poincaré ball model, a particular model of hyperbolic space that is well suited to gradient-based optimization. This allows us to develop an efficient algorithm for computing the embeddings via Riemannian optimization. With this algorithm, we show that our approach yields high-quality embeddings of large taxonomies – both with and without missing data. On data with latent hierarchies, Poincaré embeddings significantly outperform Euclidean embeddings, both in representation capacity and in generalization ability. We also show that, even in low dimensions, Poincaré embeddings are successful at predicting links in graphs.
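The distance function that makes this work is the Poincaré distance, d(u, v) = arcosh(1 + 2‖u − v‖² / ((1 − ‖u‖²)(1 − ‖v‖²))), defined for points inside the open unit ball. A minimal NumPy sketch (the function name and the `eps` guard are ours, not from the reference implementation):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Distance in the Poincaré ball. The arcosh argument blows up as
    # points approach the unit-ball boundary, which is what lets the
    # ball accommodate exponentially growing tree structures.
    sq_dist = np.dot(u - v, u - v)
    denom = (1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))

# Points near the origin behave almost as in Euclidean space, while
# points near the boundary are far from everything else.
a = np.array([0.0, 0.0])
b = np.array([0.5, 0.0])
c = np.array([0.99, 0.0])
```

Note how `poincare_distance(a, c)` is much larger than `poincare_distance(a, b)` even though the Euclidean gap is only twice as large; this growth toward the boundary is what encodes depth in a hierarchy.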
Thus, hyperbolic space is far better suited than Euclidean space for representing hierarchical structure. Poincaré embeddings automatically capture hierarchy from data, and Riemannian SGD provides the means to optimize them. The basic requirements for the Poincaré implementation are Python 3 with NumPy, PyTorch, Scikit-Learn, and NLTK (to generate the WordNet data). Going forward, we expect that a full Riemannian optimization approach could further improve embedding quality and lead to faster convergence.
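The Riemannian SGD step described in the paper rescales the Euclidean gradient by the inverse of the Poincaré metric, (1 − ‖θ‖²)² / 4, and then projects the result back inside the unit ball. A minimal sketch of one update (the renormalizing projection shown here is a common implementation choice; the function name is ours):

```python
import numpy as np

def rsgd_update(theta, euclidean_grad, lr=0.1, eps=1e-5):
    # Rescale the Euclidean gradient by the inverse Poincaré metric
    # tensor, (1 - ||theta||^2)^2 / 4, then take a descent step.
    scale = (1.0 - np.dot(theta, theta)) ** 2 / 4.0
    theta = theta - lr * scale * euclidean_grad
    norm = np.linalg.norm(theta)
    if norm >= 1.0:
        # Retract back just inside the open unit ball, since the
        # Poincaré distance is only defined for ||theta|| < 1.
        theta = theta / norm * (1.0 - eps)
    return theta

# Near the boundary the metric scaling shrinks updates, so even a
# large Euclidean gradient produces only a small step.
stepped = rsgd_update(np.array([0.9, 0.0]), np.array([1.0, 0.0]))
```

The metric rescaling is what makes this "Riemannian": points deep in the hierarchy (near the boundary) move cautiously, which stabilizes optimization there.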
Read the full Poincaré Embedding paper here.
GitHub: Poincare Embeddings