AdaNet - Fast and flexible AutoML with learning guarantees.

Nov. 12, 2018, 11:57 a.m. By: Kirti Bakshi


AdaNet is a lightweight and scalable TensorFlow AutoML framework that uses the AdaNet algorithm to train and deploy adaptive neural networks.

AdaNet provides a general framework for not only learning a neural network architecture, but also for learning to ensemble sub-networks to obtain even better models.

Why AdaNet?

Built on recent reinforcement-learning and evolutionary AutoML efforts, AdaNet is easy to use and creates high-quality models, saving ML practitioners the time otherwise spent selecting optimal neural network architectures. It implements an adaptive algorithm for learning a neural architecture as an ensemble of sub-networks.

AdaNet is also capable of adding sub-networks of different depths and widths to create a diverse ensemble, trading off performance improvement against the number of parameters.

What are the key benefits of AdaNet?

The key benefits of AdaNet are:

Fast and Easy to Use:

AdaNet implements the TensorFlow Estimator interface, which greatly simplifies machine learning programming by encapsulating training, evaluation, prediction, and export for serving. It integrates with open-source tools such as TensorFlow Hub modules, TensorFlow Model Analysis, and Google Cloud's Hyperparameter Tuner.
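As a rough sketch of what this looks like, an `adanet.Estimator` is constructed like any other TensorFlow Estimator (this follows the TF 1.x-era API; the subnetwork generator, head, and step counts below are illustrative placeholders, not working values):

```python
# Configuration sketch only, assuming TF 1.x and the adanet package.
import adanet
import tensorflow as tf

# adanet.Estimator implements the tf.estimator.Estimator interface, so
# training, evaluation, prediction, and export-for-serving all come for free.
estimator = adanet.Estimator(
    head=tf.contrib.estimator.binary_classification_head(),
    subnetwork_generator=my_subnetwork_generator,  # user-defined candidate generator (placeholder)
    max_iteration_steps=1000)  # train steps per AdaNet iteration

# The standard Estimator calls then work unchanged:
# estimator.train(input_fn=train_input_fn, max_steps=10000)
# estimator.evaluate(input_fn=eval_input_fn)
```

Because the framework exposes only the Estimator surface, existing Estimator-based training loops and serving pipelines can adopt AdaNet with minimal changes.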

Distributed training support significantly reduces training time, and scales linearly with available CPUs and accelerators.

Learning Guarantees:

Building an ensemble of neural networks poses several challenges. While complex sub-networks with more parameters tend to perform better on the training set, their greater complexity means they may not generalize to unseen data. These challenges stem from the difficulty of evaluating model performance: one could evaluate performance on a hold-out set split from the training set, but doing so reduces the number of examples available for training the neural network.

Instead, AdaNet’s approach is to optimize an objective that balances the trade-offs between the ensemble’s performance on the training set and its ability to generalize to unseen data. The intuition is for the ensemble to include a candidate sub-network only when it improves the ensemble’s training loss more than it affects its ability to generalize.
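This intuition can be illustrated with a toy sketch (not AdaNet's actual code; the function names, the complexity measure, and the penalty weight `lam` are all made up for illustration): the ensemble accepts a candidate sub-network only when its reduction in training loss outweighs the complexity penalty it adds to the objective.

```python
# Toy illustration of a complexity-regularized ensemble objective.
# All names and numbers here are hypothetical, not the real AdaNet API.

def regularized_objective(train_loss, complexities, lam=0.1):
    """Training loss plus a penalty proportional to total sub-network complexity."""
    return train_loss + lam * sum(complexities)

def accept_candidate(current_loss, current_complexities,
                     candidate_loss, candidate_complexity, lam=0.1):
    """Accept the candidate only if it lowers the regularized objective."""
    before = regularized_objective(current_loss, current_complexities, lam)
    after = regularized_objective(candidate_loss,
                                  current_complexities + [candidate_complexity],
                                  lam)
    return after < before

# A complex candidate that cuts the training loss a lot is worth its penalty:
accept_candidate(0.90, [1.0], 0.60, 2.0)  # → True

# The same candidate with only a marginal loss improvement is rejected:
accept_candidate(0.90, [1.0], 0.88, 2.0)  # → False
```

The sketch captures the trade-off described above: deeper or wider sub-networks are admitted only when their contribution to training performance exceeds the generalization cost implied by their added complexity.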

Extensible:

The key to making an AutoML framework useful for both research and production is to not only provide sensible defaults, but to also allow users to try their own subnetwork/model definitions. As a result, machine learning researchers, practitioners, and enthusiasts can now define their own subnetworks with the adanet.subnetwork.Builder API, using high-level TensorFlow APIs like tf.layers.
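The pattern can be sketched in plain Python (this is a toy stand-in, not the real adanet.subnetwork.Builder API, whose methods build TensorFlow graphs; the class and method names below are hypothetical): users subclass a builder base class, and the framework calls it for each candidate sub-network it wants to try.

```python
# Toy sketch of the "user-defined builder" extensibility pattern.
# Hypothetical names; the real adanet.subnetwork.Builder builds TF graph ops.
import abc

class SubnetworkBuilder(abc.ABC):
    """Minimal stand-in for a user-defined sub-network builder."""

    @property
    @abc.abstractmethod
    def name(self):
        """A unique name identifying this candidate sub-network."""

    @abc.abstractmethod
    def build(self, depth):
        """Return a description of the sub-network to add to the ensemble."""

class DenseBuilder(SubnetworkBuilder):
    """A user-defined builder producing fully connected candidates."""

    def __init__(self, width):
        self._width = width

    @property
    def name(self):
        return "dense_%d" % self._width

    def build(self, depth):
        # Each candidate is a stack of `depth` layers of the same width.
        return {"layers": [self._width] * depth}

builder = DenseBuilder(width=64)
builder.build(depth=3)  # → {'layers': [64, 64, 64]}
```

Supplying several builders with different widths and depths is how, per the earlier point, the framework can assemble a diverse ensemble of candidates.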

AdaNet is, as of now, an ongoing research project, and all contributions are welcome. It will also be exciting to see how AdaNet can help the research community.

For More Information: GitHub