IBM Research Toolkit For Deep Learning Security: Adversarial Robustness Toolbox (ART v0.1) Open-Sourced!
Recent years have seen tremendous advances in artificial intelligence. Modern AI systems achieve human-level performance on cognitive tasks such as recognizing objects in images, annotating videos, and converting speech to text. Many of these results are based on Deep Neural Networks (DNNs).
What are DNNs?
They are complex machine learning models that bear a certain similarity to the interconnected neurons in the human brain.
An intriguing property of DNNs is that, while they are normally highly accurate, they are quite vulnerable to so-called adversarial examples. Adversarial examples are inputs that have been deliberately modified to produce a desired response from a DNN.
Adversarial attacks pose a real threat to the deployment of AI systems in security-critical applications.
Virtually undetectable alterations of images, speech, video, and other data have been crafted to confuse AI systems. Such alterations can be crafted even if the attacker does not have exact knowledge of the DNN's architecture or access to its parameters. Moreover, adversarial attacks can also be mounted in the physical world: instead of simply manipulating the pixels of a digital image, adversaries could evade face recognition systems by, for example, wearing specially designed glasses.
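To make the idea concrete, here is a toy sketch (not ART code) of the fast gradient sign method (FGSM), one of the best-known ways of crafting adversarial examples, applied to a tiny hand-built logistic classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed linear classifier: predicts class 1 when sigmoid(w.x + b) > 0.5.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, true_label, eps):
    """Fast gradient sign method for a logistic model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (p - y) * w, so we step in its sign direction to increase the loss.
    """
    p = sigmoid(w @ x + b)
    grad = (p - true_label) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, -0.5, 0.2])           # classified as 1
x_adv = fgsm(x, true_label=1, eps=0.7)

print(predict(x), predict(x_adv))        # prints: 1 0 - the perturbation flips the prediction
```

On a DNN the principle is identical, only the gradient is computed by backpropagation through the network instead of in closed form.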
In this context, IBM Research is releasing an open-source software library written in Python, the Adversarial Robustness Toolbox, to support both researchers and developers in defending DNNs against adversarial attacks, thereby making AI systems more secure.
According to sources, "The release will be announced at the RSA conference by Dr. Sridhar Muppidi, IBM Fellow, VP and CTO IBM Security, and Koos Lodewijkx, Vice President and CTO of Security Operations and Response (SOAR), IBM Security."
Moving to the Library:
This is a Python library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defence methods for machine learning models. The Adversarial Robustness Toolbox provides implementations of many state-of-the-art methods for attacking and defending classifiers.
For developers, the library provides interfaces that support the composition of comprehensive defence systems using individual methods as building blocks.
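As a hypothetical illustration of that building-block style (the function names below are ours, not the library's API, although feature squeezing and spatial smoothing are real defence techniques), individual input-preprocessing defences can be chained into one pipeline applied before classification:

```python
import numpy as np

# Two simple input-preprocessing defences used as composable building blocks.
# The names and interfaces here are illustrative, not ART's actual API.
def feature_squeezing(x, bit_depth=4):
    """Reduce numeric precision to squeeze out fine-grained perturbations."""
    levels = 2 ** bit_depth - 1
    return np.round(x * levels) / levels

def spatial_smoothing(x, window=3):
    """1-D moving-average smoothing of the input signal."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def compose(*defences):
    """Chain individual defences into a single preprocessing pipeline."""
    def pipeline(x):
        for d in defences:
            x = d(x)
        return x
    return pipeline

defence = compose(feature_squeezing, spatial_smoothing)
x = np.array([0.11, 0.52, 0.49, 0.90])
x_defended = defence(x)   # squeezed, then smoothed, before classification
```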
The Approach For Defending DNNs:
The approach for defending DNNs is three-fold:
Measuring model robustness: First of all, the robustness of a given DNN can be assessed. A simple way of doing this is to record the loss of accuracy on adversarially altered inputs. Other approaches measure how much the internal representations and the output of a DNN vary when small changes are applied to its inputs.
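The accuracy-loss measurement can be sketched in a few lines (a toy example with a 1-D threshold classifier, assuming adversarial versions of the test inputs are already available; not ART's API):

```python
import numpy as np

def accuracy(predict, xs, ys):
    return float(np.mean([predict(x) == y for x, y in zip(xs, ys)]))

def robustness_report(predict, xs, xs_adv, ys):
    """Report clean accuracy, adversarial accuracy, and the accuracy drop."""
    clean = accuracy(predict, xs, ys)
    adv = accuracy(predict, xs_adv, ys)
    return {"clean": clean, "adversarial": adv, "drop": clean - adv}

# Toy 1-D threshold classifier and data.
predict = lambda x: int(x > 0.5)
xs     = [0.2, 0.4, 0.7, 0.9]        # clean inputs
ys     = [0, 0, 1, 1]                # true labels
xs_adv = [0.2, 0.6, 0.45, 0.9]       # adversarially perturbed inputs

print(robustness_report(predict, xs, xs_adv, ys))
# prints: {'clean': 1.0, 'adversarial': 0.5, 'drop': 0.5}
```

A large drop between clean and adversarial accuracy signals a model that needs hardening.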
Model hardening. Next, a DNN can be “hardened” to make it more robust against adversarial inputs. Common approaches include preprocessing the inputs of a DNN, augmenting the training data with adversarial examples, or changing the DNN architecture to prevent adversarial signals from propagating through the internal representation layers.
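One of the hardening approaches mentioned above, augmenting the training data with adversarial examples (adversarial training), can be sketched as follows. This is our own simplification on a toy 1-D model, not the library's implementation:

```python
import numpy as np

def train(xs, ys):
    """Fit a 1-D threshold classifier: threshold = midpoint of class means."""
    xs, ys = np.array(xs), np.array(ys)
    t = (xs[ys == 0].mean() + xs[ys == 1].mean()) / 2
    return lambda x: int(x > t)

def make_adversarial(x, y, eps=0.3):
    """Push the input towards the other class (a crude worst-case shift)."""
    return x + eps if y == 0 else x - eps

def adversarial_training(xs, ys):
    """Augment the training data with adversarial examples, then retrain.

    The adversarial copies keep their original labels, teaching the model
    to classify them correctly despite the perturbation.
    """
    xs_adv = [make_adversarial(x, y) for x, y in zip(xs, ys)]
    return train(list(xs) + xs_adv, list(ys) + list(ys))

xs = [0.1, 0.2, 0.8, 0.9]
ys = [0, 0, 1, 1]
hardened = adversarial_training(xs, ys)
```

In practice the adversarial examples are regenerated against the current model at each training step, e.g. with FGSM.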
Runtime detection. Lastly, runtime detection methods can be applied to flag any inputs that an adversary might have tampered with. These methods typically try to exploit abnormal activations caused by adversarial inputs in the internal representation layers of a DNN.
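A hypothetical sketch of such a detector, flagging inputs whose hidden-layer activations deviate too far from statistics collected on clean data (our own simplification; names and interfaces are illustrative, not ART's):

```python
import numpy as np

class ActivationDetector:
    """Flag inputs whose hidden activations look abnormal.

    Fits a per-unit mean and standard deviation on clean data, then flags
    any input whose activation z-score exceeds a threshold.
    """

    def __init__(self, get_activations, threshold=3.0):
        self.get_activations = get_activations  # maps input -> activation vector
        self.threshold = threshold

    def fit(self, clean_inputs):
        acts = np.array([self.get_activations(x) for x in clean_inputs])
        self.mean = acts.mean(axis=0)
        self.std = acts.std(axis=0) + 1e-8
        return self

    def is_adversarial(self, x):
        z = np.abs((self.get_activations(x) - self.mean) / self.std)
        return bool(z.max() > self.threshold)

# Toy "hidden layer": a fixed linear map standing in for a DNN's internals.
W = np.array([[1.0, 0.5, -0.2],
              [0.3, -1.0, 0.8],
              [-0.5, 0.2, 1.0],
              [0.7, 0.7, 0.1]])
get_acts = lambda x: W @ x

rng = np.random.default_rng(0)
clean = [rng.normal(size=3) for _ in range(200)]
detector = ActivationDetector(get_acts, threshold=4.0).fit(clean)

print(detector.is_adversarial(np.array([25.0, -25.0, 25.0])))  # prints: True
```

Real detectors in this family train a classifier on the activation patterns rather than using simple z-scores, but the principle is the same.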
How To Get Started:
To get started with the Adversarial Robustness Toolbox, see the open-source release under the link given below. The release includes extensive documentation and tutorials to help researchers and developers get started quickly. A white paper outlining the details of the methods implemented in the library is in preparation.
This first release of the Adversarial Robustness Toolbox supports DNNs implemented in the TensorFlow and Keras deep learning frameworks. Future releases will extend support to other popular frameworks such as PyTorch or MXNet.
What is the main aim?
As an open-source project, the Adversarial Robustness Toolbox aims to create a vibrant ecosystem of contributors from both industry and academia. Compared to similar ongoing efforts, its main distinction is the focus on defence methods and on the composability of practical defence systems.
The Adversarial Robustness Toolbox project is hoped to stimulate research and development around the adversarial robustness of DNNs, and to advance the deployment of secure AI in real-world applications.
Source And Information: GitHub