Intel develops self-learning chip that will accelerate Artificial Intelligence
Intel has been working on high-end chips to meet the demands of next-generation technologies such as Augmented Reality, Virtual Reality, and Artificial Intelligence. A close look at today's gadgets shows that most of them rely on machine learning and artificial intelligence. Intel has now come up with a next-generation chip known as "Loihi". Instead of relying on raw computing horsepower, the new chip uses an architecture modeled after the human brain.
Intel has been working on neuromorphic technology for quite some time now. In this chip, the fundamental computing units are spiking neurons rather than logic gates. These spiking neurons can pass signals of varying strength (similar to neurons in our brain), and they do not need to be driven by a clock like a regular processor; instead, they fire only when needed.
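To make the idea concrete, here is a minimal sketch of a leaky integrate-and-fire spiking neuron simulated in discrete time steps. This is only an illustration of the general spiking-neuron principle, not Intel's actual Loihi neuron model; the threshold and leak constants are arbitrary values chosen for the example.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, simulated step by step.
# Illustrative only: not Intel's actual Loihi neuron model.

def simulate_lif(input_currents, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires a spike."""
    potential = 0.0          # membrane potential
    spike_times = []
    for t, current in enumerate(input_currents):
        potential = potential * leak + current   # leak, then integrate input
        if potential >= threshold:               # fires only when needed,
            spike_times.append(t)                # rather than on every clock tick
            potential = 0.0                      # reset after the spike
    return spike_times

# Weak inputs alone never cross the threshold; a burst of stronger inputs does.
print(simulate_lif([0.2, 0.2, 0.2, 0.9, 0.9, 0.1, 0.0, 0.8, 0.8]))  # -> [3, 7]
```

The key difference from a conventional processor is visible in the loop: nothing happens downstream unless the accumulated input crosses the threshold, so activity (and energy use) scales with how often neurons actually fire.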
Loihi, Intel's latest chip, packs roughly 130,000 artificial neurons (1,024 per core across 128 cores) and more than 130 million synaptic connections. That is still tiny compared to the human brain, which has more than 80 billion neurons. The chip works much like the brain: it relays information with pulses (spikes), strengthens connections that are used frequently, and stores those changes locally at the interconnections.
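The "strengthening frequent connections" idea can be sketched with a toy Hebbian-style update, where a synapse grows whenever the neurons on both sides are active together and the change is stored locally in that weight. This is only a sketch of the general principle described above; Loihi's actual learning rules are programmable and considerably more sophisticated.

```python
# Toy Hebbian-style learning: synapses between co-active neurons get stronger,
# and the change lives locally in the weight itself. Illustrative only.

def hebbian_update(weights, pre_spikes, post_spikes, rate=0.05, w_max=1.0):
    """Strengthen weights[i][j] whenever pre-neuron i and post-neuron j
    spike in the same time step (1 = spike, 0 = no spike)."""
    for i, pre in enumerate(pre_spikes):
        for j, post in enumerate(post_spikes):
            if pre and post:
                # local update: only the synapse between i and j changes
                weights[i][j] = min(w_max, weights[i][j] + rate)
    return weights

# Two input neurons connected to two output neurons.
w = [[0.1, 0.1],
     [0.1, 0.1]]

# Input 0 and output 1 keep firing together, so that one synapse grows.
for _ in range(10):
    w = hebbian_update(w, pre_spikes=[1, 0], post_spikes=[0, 1])

print(w)  # roughly [[0.1, 0.6], [0.1, 0.1]] -- the frequently used connection strengthened
```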
According to Intel, the new chip is expected to make machine learning workloads significantly faster while cutting the power required for them by up to 1,000 times. Another advantage is that all of the learning happens on-chip, which means the chip can learn from the data it encounters rather than requiring a separate dataset to train a model offline. Just imagine how much easier it would become to build a robot that thinks a little more like a human. The chip also has a lot of potential in industrial and automotive applications.
Intel is not the only company pursuing this idea; IBM has also developed a neuromorphic chip called "TrueNorth", which has 4,096 cores and can simulate more than 256 million synapses. But not everything about this latest technological development is settled. Some researchers argue that such chips cannot run every kind of deep learning model. It now remains to be seen how Intel will put this chip to work in real-world environments.