Deep Image Prior: Image restoration with neural networks but without learning
Deep convolutional networks have become a popular tool for image generation and restoration. Their excellent performance is generally attributed to their ability to learn realistic image priors from a large number of example images. This paper shows, on the contrary, that the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. To demonstrate this, the paper shows that a randomly initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as super-resolution, denoising, and inpainting.
The same prior, furthermore, can be used to invert deep neural representations in order to diagnose them, and to restore images from flash/no-flash input pairs. Apart from its diverse applications, the approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods:

Learning-based methods using deep convolutional networks,

Learning-free methods based on handcrafted image priors such as self-similarity.
Goal:
Understand the prior that the structure of a neural network imposes, as demonstrated by the experiments performed.
Introduction:
Deep convolutional neural networks (ConvNets) currently set the state of the art in inverse image reconstruction problems such as denoising or single-image super-resolution. ConvNets have also been used with great success in more “exotic” problems such as reconstructing an image from its activations within certain deep networks or from its HOG descriptor. More generally, ConvNets with similar architectures are nowadays used to generate images with approaches such as generative adversarial networks (GANs), variational autoencoders, and direct pixel-wise error minimization.
State-of-the-art ConvNets for image restoration and generation are almost invariably trained on large datasets of images. One may thus assume that their excellent performance is due to their ability to learn realistic image priors from data. However, learning alone is not sufficient to explain the good performance of deep networks.
In this work, it is shown that, contrary to expectations, a great deal of image statistics are captured by the structure of a convolutional image generator rather than by any learned capability. This is particularly true for the statistics required to solve various image restoration problems, where the image prior must integrate information that has been lost in the degradation process. To show this, untrained ConvNets are applied to the solution of several such problems.
Instead of following the common paradigm of training a ConvNet on a large dataset of example images, the authors fit a generator network to a single degraded image. In this scheme, the network weights serve as a parameterization of the restored image. The weights are randomly initialized and fitted to maximize their likelihood given a specific degraded image and a task-dependent observation model.
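This fitting scheme can be sketched as follows; this is a minimal illustration assuming PyTorch, where a tiny ConvNet stands in for the paper's actual encoder-decoder generator and the toy ramp image is fabricated for the example:

```python
# Minimal sketch of the deep image prior for denoising.
# Assumptions: PyTorch; a tiny illustrative generator, not the paper's architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Degraded observation x0: a smooth ramp image corrupted by Gaussian noise.
clean = torch.linspace(0.0, 1.0, 32).repeat(32, 1).reshape(1, 1, 32, 32)
x0 = clean + 0.1 * torch.randn_like(clean)

# Randomly initialized generator f_theta: its structure is the only prior.
f = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

z = torch.randn(1, 1, 32, 32)            # fixed random input code
opt = torch.optim.Adam(f.parameters(), lr=1e-2)

losses = []
for step in range(200):                  # stopping early acts as regularization
    opt.zero_grad()
    loss = ((f(z) - x0) ** 2).mean()     # task-dependent data term on f(z) vs. x0
    loss.backward()
    opt.step()
    losses.append(loss.item())

restored = f(z).detach()                 # the restored image is f_theta(z)
```

Note that nothing here is trained on a dataset: the only inputs are the single degraded image and a fixed random code, and the weights are optimized from random initialization.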
The paper also shows that this very simple formulation is highly competitive for standard image processing problems such as denoising, inpainting, and super-resolution. This is particularly remarkable because no aspect of the network is learned from data; the weights of the network are always randomly initialized, so the only prior information is in the structure of the network itself. To the best of the authors' knowledge, this is the first study that directly investigates the prior captured by deep convolutional generative networks independently of learning the network parameters from images.
In addition to standard image restoration tasks, the paper also shows an application of their technique to understanding the information contained within the activations of deep neural networks.
In short, this paper investigates the prior implicitly captured by the choice of a particular generator network structure, before any of its parameters are learned.
Applications:

Denoising and generic reconstruction.

Super-resolution.

Inpainting.

Natural pre-image.

Flash/no-flash reconstruction.
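Across these applications, the generator prior stays the same and only the task-dependent data term changes. A hedged sketch of what such data terms could look like, assuming PyTorch (the function names are illustrative, not from the paper's code):

```python
# Illustrative per-task data terms for the deep image prior.
# Assumption: PyTorch; x is the generated image f_theta(z), x0 the observation.
import torch
import torch.nn.functional as F

def denoising_loss(x, x0):
    # Compare the generated image against the noisy observation everywhere.
    return ((x - x0) ** 2).mean()

def inpainting_loss(x, x0, mask):
    # Compare only at observed pixels (mask == 1); missing pixels are ignored.
    return (((x - x0) * mask) ** 2).sum() / mask.sum()

def super_resolution_loss(x, x0_low, factor):
    # Compare a fixed downsampling of the generated image with the
    # low-resolution input (average pooling as a stand-in downsampler).
    return ((F.avg_pool2d(x, factor) - x0_low) ** 2).mean()
```

In each case the restored image is read off as the generator's output after optimization; the prior never changes, only the way the output is compared to the degraded observation.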
Related work:

The method is clearly related to the image restoration and synthesis methods based on learnable ConvNets referenced above.

At the same time, it is equally related to an alternative group of restoration methods that avoid training on a hold-out set.