TDLS: Principles of Riemannian Geometry in Neural Networks.

Aug. 20, 2018, 10:43 p.m. By: Kirti Bakshi

The study presented in this paper treats neural networks as geometric transformations acting on the coordinate representation of the underlying data manifold from which the data is sampled. It forms part of an attempt to construct a formalized general theory of neural networks in the setting of Riemannian geometry.

From this perspective, the paper develops and proves the following theoretical results for feedforward networks:

  • First, the paper shows that, as opposed to ordinary networks, which are static, residual neural networks are finite difference approximations to dynamical systems of first-order differential equations. This implies that the network is learning systems of differential equations governing the coordinate transformations that represent the data (see the sketch after this list).

  • Secondly, the paper shows that a closed-form solution for the metric tensor on the underlying data manifold can be found by backpropagating the coordinate representations learned by the neural network itself (see the pullback formula sketched further below).
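To make the first point concrete, the following is a minimal sketch, not taken from the paper, of how a residual block x_{l+1} = x_l + h·f(x_l) can be read as one forward-Euler step of the differential equation dx/dt = f(x). The residual branch f, the step size h and the layer count are illustrative assumptions.

```python
import numpy as np

def f(x, theta):
    """Hypothetical residual branch: a one-layer tanh map with weights theta."""
    W, b = theta
    return np.tanh(x @ W + b)

def residual_network(x, thetas, h=1.0):
    """Stack of residual blocks: x_{l+1} = x_l + h * f(x_l, theta_l).

    For small h and many layers this is a forward-Euler discretization of
    the ODE dx/dt = f(x, theta(t)), which is the sense in which a residual
    network approximates a first-order dynamical system.
    """
    for theta in thetas:
        x = x + h * f(x, theta)
    return x

# Toy usage: 2-D inputs flowing through 10 residual blocks.
rng = np.random.default_rng(0)
thetas = [(0.1 * rng.standard_normal((2, 2)), np.zeros(2)) for _ in range(10)]
x0 = rng.standard_normal((5, 2))          # 5 sample points in R^2
xL = residual_network(x0, thetas, h=0.1)  # their coordinates after the flow
```

Shrinking h while increasing the number of layers makes the discrete update an increasingly fine approximation of the continuous flow, which is the limit in which the dynamical-systems reading applies.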

This is formulated in a formal, abstract sense as a sequence of Lie group actions on the metric fibre space in the principal and associated bundles over the data manifold. Toy experiments were also run to confirm parts of the proposed theory and to provide intuition about how neural networks operate on data.
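As a rough illustration of what backpropagating the coordinate representation of the metric tensor amounts to, the following is the standard pullback computation, offered here as an interpretive sketch rather than a quotation from the paper: if the network learns a coordinate transformation x̃ = φ(x) in which the metric is (approximately) the flat metric δ_ab, then the metric in the original input coordinates is recovered by pulling δ_ab back through the Jacobian of φ.

```latex
% Pullback of an (approximately) flat output metric \delta_{ab}
% through the learned coordinate transformation \tilde{x} = \varphi(x).
g_{ij}(x) = \frac{\partial \varphi^{a}}{\partial x^{i}}
            \frac{\partial \varphi^{b}}{\partial x^{j}} \, \delta_{ab},
\qquad
ds^{2} = g_{ij}(x) \, dx^{i} \, dx^{j} .
```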

Introduction:

The introduction of the paper is divided into two main parts:

  • Section 1.1 of the paper succinctly describes the ways in which neural networks are usually understood to operate.

  • Section 1.2 articulates a less common perspective. It is this perspective that the study develops, showing that there is a rich connection between neural networks and Riemannian geometry.

Numerical experiments:

This section of the paper presents the results of numerical experiments used to confirm and better understand the proposed theory; a minimal toy sketch of this kind of experiment follows the list:

  • Neural networks with C^k differentiable coordinate transformations

  • Coordinate representations of the data manifold and metric tensor

  • Effect of batch size on set connectedness and topology

  • Effect of the number of layers on the separation process
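The following is a minimal sketch, under stated assumptions, of the kind of toy experiment described above: it trains a small residual network on a synthetic two-class 2-D dataset and records the hidden coordinate representation of the data after each layer, so the layer-by-layer separation of the classes can be inspected. The architecture, dataset and hyperparameters are illustrative choices, not those used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic two-class dataset: two concentric noisy rings in R^2 (illustrative choice).
n = 200
theta = 2 * torch.pi * torch.rand(n)
r = torch.cat([torch.full((n // 2,), 1.0), torch.full((n // 2,), 2.0)])
X = torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)
X = X + 0.05 * torch.randn_like(X)
y = torch.cat([torch.zeros(n // 2), torch.ones(n // 2)]).long()

class ResidualNet(nn.Module):
    """Stack of residual blocks acting on 2-D coordinates, plus a linear readout."""
    def __init__(self, n_layers=8, h=0.1):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(2, 2) for _ in range(n_layers)])
        self.readout = nn.Linear(2, 2)
        self.h = h

    def forward(self, x, return_states=False):
        states = [x]
        for block in self.blocks:
            x = x + self.h * torch.tanh(block(x))   # one "Euler step" per layer
            states.append(x)
        logits = self.readout(x)
        return (logits, states) if return_states else logits

model = ResidualNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Layer-by-layer coordinate representations of the data manifold:
# states[l] holds the 2-D coordinates of every sample after l residual blocks.
with torch.no_grad():
    _, states = model(X, return_states=True)
print([s.shape for s in states])
```

Plotting the successive entries of `states` (e.g. with matplotlib) shows how the learned coordinate transformations progressively deform the rings until the classes become separable, which is the kind of intuition the paper's experiments aim to provide.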

Conclusion:

This paper forms part of an attempt to construct a formalized general theory of neural networks as a branch of Riemannian geometry. In the forward direction, starting in Cartesian coordinates, the network learns a sequence of coordinate transformations to find a coordinate representation of the data manifold that encodes the data well, and the experimental results suggest this imposes a flatness constraint on the metric tensor in the learned coordinate system.

One can then backpropagate the coordinate representation of the metric tensor to find its form in Cartesian coordinates. This can be used to define an ε-δ relationship between the input and output data.
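Read in the standard analytical sense, and offered here only as an interpretive sketch of that claim, an ε-δ relationship between input and output data says that points that are close under the input-space metric map to points that are close under the output-space metric:

```latex
% Standard epsilon-delta continuity statement for the learned map \varphi,
% with d_{in} and d_{out} the distances induced by the respective metric tensors.
\forall \epsilon > 0 \; \exists \delta > 0 :\quad
d_{\mathrm{in}}(x, x') < \delta
\;\Longrightarrow\;
d_{\mathrm{out}}\bigl(\varphi(x), \varphi(x')\bigr) < \epsilon .
```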

Coordinate backpropagation was formulated in a formal, abstract sense in terms of Lie group actions on the metric fibre bundle. The error backpropagation algorithm was then formulated in terms of Lie group actions on the frame bundle. In the continuum limit of a residual network, the Lie group acts smoothly along the fibres of these bundles.

Experiments were then conducted to confirm and better understand different aspects of this formulation.

For more information and a deeper insight into the paper, refer to the link below:

Link To The PDF: Click Here

TDLS: Principles of Riemannian Geometry in Neural Networks:

Video Source: Amir Feizpour