Insight into Fast Style Transfer in TensorFlow

Jan. 21, 2018, 3:50 a.m. By: Vishakha Jha

Fast Style Transfer

Photographs give us diverse ways of looking at things, and sometimes editing a normal picture and adding a style to it turns it into something rare and exquisite. With the same purpose of achieving something new, a concept called Fast Style Transfer, built on TensorFlow, has recently been worked on. It is a faster and better implementation of Neural Style, which combines the content of one image with the style of another image through convolutional neural networks. Fast Style Transfer adds styles and variations from famous paintings to any picture in a fraction of a second, turning it into a whole new piece of art.

The implementation is based on a combination of Gatys' A Neural Algorithm of Artistic Style, Johnson's Perceptual Losses for Real-Time Style Transfer and Super-Resolution, and Ulyanov's Instance Normalization.

A Neural Algorithm of Artistic Style introduces an artificial system, based on a deep neural network, that creates artistic images of high perceptual quality. It uses neural representations to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.
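As a rough illustration of how style can be captured separately from content, the following NumPy sketch (not taken from the repository's code) computes the Gram-matrix style representation used by Gatys et al.: the correlations between a layer's feature maps summarize the style, while the feature maps themselves carry the content.

import numpy as np

def gram_matrix(features):
    """features: activations of one CNN layer, shape (height, width, channels)."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # each row is one spatial position
    gram = flat.T @ flat                # (channels, channels) feature correlations
    return gram / (h * w * c)           # normalize by layer size

# Two images with similar textures and colours yield similar Gram matrices even
# when their spatial layouts (their "content") are completely different.
layer_activations = np.random.rand(64, 64, 128).astype(np.float32)  # placeholder activations
print(gram_matrix(layer_activations).shape)  # -> (128, 128)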

Another significant concept is Johnson's perceptual losses, which addresses image transformation problems that are usually handled by training feed-forward convolutional neural networks with a per-pixel loss. Parallel work shows that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. Combining both approaches for image style transfer, a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
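The following toy NumPy sketch (illustrative only, with phi standing in for a layer of a pretrained network) contrasts a per-pixel loss with a perceptual loss: the perceptual loss compares images in feature space, which is far less sensitive to small pixel-level differences such as a one-pixel shift.

import numpy as np

def per_pixel_loss(output_img, target_img):
    return np.mean((output_img - target_img) ** 2)

def perceptual_loss(output_img, target_img, phi):
    """phi: feature extractor standing in for a layer of a pretrained CNN."""
    return np.mean((phi(output_img) - phi(target_img)) ** 2)

# A toy "feature extractor": 2x2 average pooling, so small pixel shifts matter
# much less than they do under the per-pixel loss.
def phi(img):
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

a = np.random.rand(64, 64)
b = np.roll(a, 1, axis=1)  # the same image shifted by one pixel
print(per_pixel_loss(a, b), perceptual_loss(a, b, phi))  # the second value is much smaller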

Ulyanov's Instance Normalization shows how a small change to the stylization architecture can bring a significant improvement. The modification is limited to replacing batch normalization with instance normalization, and to applying it at both training and testing time. The resulting method can be used to train high-performance architectures for real-time image generation.
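A minimal NumPy sketch of the difference (again, not the repository's code): instance normalization computes the mean and variance per image and per channel, over spatial positions only, and applies the same operation at training and test time, whereas batch normalization would also average across the images in a batch.

import numpy as np

def instance_norm(x, scale, shift, eps=1e-5):
    """x: feature maps of shape (batch, height, width, channels)."""
    mean = x.mean(axis=(1, 2), keepdims=True)  # per-image, per-channel mean
    var = x.var(axis=(1, 2), keepdims=True)    # per-image, per-channel variance
    return scale * (x - mean) / np.sqrt(var + eps) + shift

# Batch normalization would instead reduce over axes (0, 1, 2), mixing
# statistics across all images in the batch.
x = np.random.rand(4, 32, 32, 3).astype(np.float32)
y = instance_norm(x, scale=np.ones(3, np.float32), shift=np.zeros(3, np.float32))
print(y.mean(axis=(1, 2)))  # approximately zero for every image and channel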


In the implementation of Fast Style Transfer, the loss function is quite similar to the one described in Gatys, using VGG19 rather than VGG16 and using "shallower" layers than in Johnson's implementation. In practice, this leads to larger-scale style features in transformations.
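To make the layer choice concrete, here is a hedged sketch of how the total loss might be assembled from VGG19 activations, reusing the gram_matrix helper from the first sketch. The layer names follow the VGG19 convention of Gatys' paper (relu1_1 through relu5_1 for style, a deeper layer for content), and the weights are illustrative defaults, not necessarily the exact values used in the repository.

import numpy as np

STYLE_LAYERS = ('relu1_1', 'relu2_1', 'relu3_1', 'relu4_1', 'relu5_1')  # "shallow" style layers
CONTENT_LAYER = 'relu4_2'

def total_loss(feats_out, feats_content, style_grams,
               content_weight=7.5, style_weight=100.0):
    """feats_out / feats_content: dicts mapping VGG19 layer names to activations
    of shape (h, w, c); style_grams: precomputed Gram matrices of the style image."""
    # Content loss: mean squared distance between activations at one deep layer.
    content_loss = np.mean(
        (feats_out[CONTENT_LAYER] - feats_content[CONTENT_LAYER]) ** 2)
    # Style loss: squared distance between Gram matrices at each style layer.
    style_loss = sum(
        np.sum((gram_matrix(feats_out[layer]) - style_grams[layer]) ** 2)
        for layer in STYLE_LAYERS)
    return content_weight * content_loss + style_weight * style_loss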

For successful execution of Fast Style Transfer, the major requirements include TensorFlow 0.11.0, Python 2.7.9, Pillow 3.4.2, scipy 0.18.1, numpy 1.11.2, and FFmpeg 3.1.3 to stylize video. The repository provides a training script to train a new style transfer network; running it with python lists all the possible parameters. A separate script evaluates a trained style transfer network, and evaluation on a Maxwell Titan X takes on average 100 ms per frame. A third script transfers style into a video.
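For illustration only, typical invocations of these scripts might look like the following. The script names (style.py, evaluate.py, transform_video.py) and flags are based on Logan Engstrom's fast-style-transfer repository linked below; they are an assumption here and may differ between versions, so the repository's README should be treated as the authoritative reference.

# Train a new style transfer network (run "python style.py" alone to list all parameters)
python style.py --style path/to/style/img.jpg --checkpoint-dir path/to/checkpoint/dir

# Evaluate a trained network on a directory of images
python evaluate.py --checkpoint path/to/model.ckpt --in-path dir/of/test/imgs/ --out-path dir/for/results/

# Transfer style into a video (requires FFmpeg)
python transform_video.py --in-path path/to/input/vid.mp4 --checkpoint path/to/model.ckpt --out-path out/video.mp4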

It is amazing how effortlessly one can apply the style of a painting to an image. Some of the photos look like actual masterpieces! Indeed, they are works of art, but the artist is no longer a human. It is no surprise that neural networks are at the heart of this capability, and there is still plenty of scope for further improvement and enhancement in this field.

More Information: GitHub

Fast Style Transfer in TensorFlow: Video Demo

Video Source: Logan Engstrom