NNABLA: Neural Network Libraries, by Tokyo-based Sony, is an open-source deep learning framework intended for research, development, and production. The aim is for it to run everywhere: desktop PCs, HPC clusters, embedded devices, and production servers.
Neural Network Libraries: CUDA extension: An extension library of Neural Network Libraries that lets users speed up computation on CUDA-capable GPUs.
Neural Network Libraries: Examples: Working examples of Neural Network Libraries, from basic to state-of-the-art.
Neural Network Console: A Windows GUI app for neural network development.
The framework was designed with modularity and extensibility in mind. Community contributors can add new neural network operator or optimizer modules, as well as specialized implementations of neural network modules for a specific target device, as extensions.
The backend code is written primarily in C++11. On embedded devices it can be accessed natively via C++11; otherwise, a Python API is also available. The C++ API is suitable for deploying inference or training algorithms to embedded systems and servers, while the Python API is better suited to fast prototyping and experimentation with deep learning systems.
Now, how will it be helpful?
By making these core libraries available free of charge, Sony enables software engineers and designers to develop deep learning programs and incorporate them into their products and services. The move to open-source the framework is also intended to let the development community build further on the core libraries.
Sony already uses neural networks in products for image recognition, gesture sensing, and real estate price estimation. The libraries were designed with the following considerations in mind:
High speed: These core libraries carry out neural network training and execution at the highest available speeds, allowing deep-learning-based technology to be developed with shorter iteration times.
Development efficiency: Sony also provides a layer of interface functions in Python, the prevailing programming language in deep learning, which allows for highly efficient development and easy prototyping. This enables intuitive neural network design in fewer lines of code, so developers building technology that uses deep learning can focus on creating neural networks more efficiently, in less time, and at a lower cost.
Moving On To The Features That Come With It:
Easy, flexible and expressive:
- The Python API built on the Neural Network Libraries C++11 core gives you flexibility and productivity.
Portable and multi-platform:
- The Python API can be used on both Linux and Windows
- Most of the library code is written in C++11 and is deployable to embedded devices
- Adding new modules, such as neural network operators and optimizers, is easy
The library also allows developers to add specialized implementations. For example, CUDA is provided as a backend extension, which delivers a speed-up through GPU-accelerated computation.
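The backend-extension idea can be pictured as dispatching each operator to whichever backend implementation has been registered for it, so a specialized (e.g. GPU-accelerated) implementation can be plugged in without touching the core. The sketch below is a generic illustration in plain Python, not NNabla's actual extension mechanism; all names in it are hypothetical.

```python
# Generic sketch of a backend-extension registry (illustrative only,
# not NNabla's real API): operator implementations are registered per
# backend and looked up at call time.

_REGISTRY = {}

def register(op_name, backend):
    """Decorator registering an implementation of `op_name` for `backend`."""
    def wrap(fn):
        _REGISTRY[(op_name, backend)] = fn
        return fn
    return wrap

def dispatch(op_name, backend, *args):
    """Use the requested backend; fall back to the reference CPU version."""
    fn = _REGISTRY.get((op_name, backend)) or _REGISTRY[(op_name, "cpu")]
    return fn(*args)

@register("relu", "cpu")
def relu_cpu(xs):
    return [max(0.0, x) for x in xs]

@register("relu", "cuda")  # stand-in for a GPU-accelerated kernel
def relu_cuda(xs):
    # A real extension would launch a CUDA kernel here; we reuse the
    # same math only to show where the specialized code would live.
    return [x if x > 0.0 else 0.0 for x in xs]

print(dispatch("relu", "cuda", [-1.0, 2.0]))  # [0.0, 2.0]
```

The point of the pattern is that the core never needs to know which backends exist; an extension package only has to register its implementations.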
- High speed on a single CUDA GPU
- Memory optimization engine
- Support for multiple GPUs
Write less, do more:
- Neural Network Libraries let you define a computation graph (neural network) intuitively, with less code.
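To make "a graph in a few lines" concrete, here is a rough sketch of the style: a two-layer feed-forward network expressed as a short chain of function applications. This is plain Python with stand-in functions, not the actual NNabla API, and the layer names and parameters are invented for illustration.

```python
# Illustrative only: a tiny two-layer perceptron as a short chain of
# function calls, mirroring the "few lines define the whole graph" style.
import math

def affine(x, w, b):
    # y_j = sum_i x_i * w[i][j] + b_j
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*w), b)]

def relu(x):
    return [max(0.0, v) for v in x]

def sigmoid(x):
    return [1.0 / (1.0 + math.exp(-v)) for v in x]

# The whole "graph" is two lines of composition:
def net(x, params):
    h = relu(affine(x, params["w1"], params["b1"]))
    return sigmoid(affine(h, params["w2"], params["b2"]))

params = {"w1": [[1.0, -1.0], [0.5, 0.5]], "b1": [0.0, 0.0],
          "w2": [[1.0], [1.0]], "b2": [0.0]}
print(net([1.0, 2.0], params))
```

In the real library the same idea applies, except each call also records the operation into a graph so gradients can later flow back through it.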
Dynamic Computation Graph Support:
- A static computation graph, the approach generally used so far, builds the computation graph before the graph is executed. A dynamic computation graph, by contrast, enables flexible runtime network construction. As an added advantage, this library can use both paradigms, static and dynamic.
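The two paradigms can be contrasted with a toy sketch in plain Python (again illustrative, not NNabla code): the static style first records the graph as data and executes it later, while the dynamic, define-by-run style computes values as the Python code runs, so ordinary control flow can shape the network.

```python
# Toy contrast between the two graph paradigms; illustrative only.

# Static ("define-and-run"): first record the graph as data ...
graph = [("mul", 2.0), ("add", 1.0)]          # y = 2*x + 1

def run(graph, x):
    # ... then execute the recorded operations later, possibly many times.
    for op, c in graph:
        x = x * c if op == "mul" else x + c
    return x

# Dynamic ("define-by-run"): the graph IS the executing Python code,
# so an input-dependent branch is perfectly natural.
def dynamic(x):
    y = 2.0 * x
    if y > 4.0:          # structure can depend on the data itself
        y = y + 1.0
    return y

print(run(graph, 3.0))   # 7.0
print(dynamic(3.0))      # 7.0
print(dynamic(1.0))      # 2.0 (the branch was skipped)
```

The static form is easy to optimize ahead of time; the dynamic form trades some of that for flexibility, which is why supporting both is useful.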
For More Information: GitHub