Google recently released an academic paper, "One Model to Learn Them All", which provides guidance for creating a single machine learning model that can address multiple tasks. The MultiModel is presented as a step towards a future of machine learning in which one large neural network handles many jobs. It could prove a significant initiative for improving the overall performance of machine learning systems, and it performs competently across a variety of tasks in speech recognition, image recognition, language parsing and object detection.
The paper could provide a basis for future machine learning systems that are more general and more accurate. The model yields good results on a number of problems spanning multiple domains. Its architecture combines building blocks from several domains, including convolutional layers, an attention mechanism and sparsely-gated mixture-of-experts layers. Google does not claim to have a master algorithm that can learn everything at once; rather, the network contains subsystems adapted to different kinds of problems, along with a gating system that directs each input to the appropriate expert layers.
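To illustrate the routing idea behind sparsely-gated layers, here is a minimal numpy sketch of a mixture-of-experts step: a small gating network scores the experts, only the top-k are run, and their outputs are blended by the normalised gate weights. This is a simplified illustration under assumed shapes and names (`sparsely_gated_layer`, `gate_weights` and so on are hypothetical), not Google's actual MultiModel implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sparsely_gated_layer(x, expert_weights, gate_weights, k=2):
    """Route input x through the top-k experts chosen by a gating network.

    x              : input vector, shape (d,)
    expert_weights : one (d, d) weight matrix per expert, shape (n_experts, d, d)
    gate_weights   : gating network weights, shape (d, n_experts)
    """
    scores = x @ gate_weights                 # one relevance score per expert
    top_k = np.argsort(scores)[-k:]           # indices of the k highest-scoring experts
    gates = softmax(scores[top_k])            # normalise over the selected experts only
    # Weighted sum of the selected experts' outputs; unselected experts do no work,
    # which is what makes the layer "sparse".
    return sum(g * (x @ expert_weights[i]) for g, i in zip(gates, top_k))

# Toy usage with random weights.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
experts = rng.normal(size=(n_experts, d, d))
gate_w = rng.normal(size=(d, n_experts))
y = sparsely_gated_layer(x, experts, gate_w, k=2)
print(y.shape)
```

In the real paper the experts are trained feed-forward networks and the gating is learned end-to-end, but the routing principle is the same: only a few experts are activated per input, keeping computation cheap while the total model capacity stays large.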
The research shows that Google's approach could be useful for the future development of similar systems that address different domains, but it still requires more testing and improvements in efficiency. The results need to be examined to see how well the study generalises and works for other fields; this has not been verified yet, but the project has considerable future scope. The Google Brain team has released the MultiModel code as part of the TensorFlow open source project, so it is freely available for others to experiment with. The project is expected to have broad applicability and could open new doors in the field of machine learning.
Image Source: pbs.twimg.com