Scientists use ‘Hashing’ to reduce the computation required for Deep Learning

June 11, 2017, 3:19 a.m. By: Pranjal Kumar

Ryan Spring and Anshumali Shrivastava

Scientists at Rice University have developed a technique for rapid data lookup that reduces the amount of computation required for deep learning, and with it the time and energy consumed.

The technique is highly scalable and can be applied to any large deep learning architecture. The research will be presented in August at the KDD 2017 conference in Halifax.

According to one of the scientists, “All the big companies in the world are using deep learning methods, but they have to deal with the large amount of computation it requires. This technology would reduce that computation by a lot.”

The scientists developed the technique based on ‘hashing’ to reduce the number of computations required. In hashing, a smart hash function converts data into small numbers called hashes, which are stored in tables for rapid lookup.
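To make the idea concrete, here is a minimal sketch of hash-table lookup; the function and bucket names are illustrative, not from the Rice paper:

```python
# Minimal sketch of hashing for rapid lookup: a hash function maps each
# data item to a small number (its "hash"), which indexes a bucket in a
# table. Lookup then scans only one bucket instead of the whole dataset.

def make_table(items, num_buckets=8):
    table = {b: [] for b in range(num_buckets)}
    for item in items:
        h = hash(item) % num_buckets   # Python's built-in hash as a stand-in
        table[h].append(item)
    return table

def lookup(table, item, num_buckets=8):
    # Only the bucket the item hashes to is examined.
    return item in table[hash(item) % num_buckets]

table = make_table(["cat", "dog", "fish", "bird"])
print(lookup(table, "dog"))    # True
print(lookup(table, "horse"))  # False
```

With a well-distributed hash function, each bucket stays small, so lookup cost barely grows with the size of the dataset.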

This technique combines two established methods: sparse backpropagation and a variant of locality-sensitive hashing.
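The combination can be sketched as follows. This is a hedged illustration, not the authors' implementation: signed random projections (a common form of locality-sensitive hashing) bucket neuron weight vectors, and for a given input only the neurons in the matching bucket are activated and later updated (sparse backpropagation). All sizes and names here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_neurons, n_planes = 16, 100, 6

weights = rng.standard_normal((n_neurons, dim))   # one weight vector per neuron
planes = rng.standard_normal((n_planes, dim))     # random hyperplanes for LSH

def lsh_code(v):
    # Sign pattern of the random projections, packed into an integer bucket id.
    bits = (planes @ v) > 0
    return int(np.packbits(bits)[0])

# Build the hash table once: bucket id -> indices of neurons in that bucket.
buckets = {}
for i, w in enumerate(weights):
    buckets.setdefault(lsh_code(w), []).append(i)

# At inference/training time, hash the input and activate only the neurons
# whose weights collide with it; gradients would flow only through these.
x = rng.standard_normal(dim)
active = buckets.get(lsh_code(x), [])
print(f"{len(active)} of {n_neurons} neurons selected")
```

Because similar vectors tend to land in the same bucket, the selected neurons are those most likely to respond strongly to the input, which is how most of the computation can be skipped with little loss in accuracy.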

According to the researchers, in small-scale tests the computation was reduced by as much as 95%, while accuracy remained within 1% of that obtained by standard methods.

Current deep learning techniques train neural networks whose expressive power grows as more neurons are added, but adding more neurons also drives up the computation required.

Google itself is trying to train a network with 137 billion neurons.

Shrivastava, one of the researchers, said, “This technology will perform even better when implemented in large-scale projects, as it will save a massive amount of time.”

Image Source: Phys