Deep dreaming

For the Google software, see DeepDream.
[Image: An image undergoing the deep dreaming process]

Deep dreaming refers to the generation of images that produce desired activations in a trained deep network. For example, given a network trained to recognize cats (among other things), a dreamed cat image can be synthesized by gradient descent optimization: a random image is adjusted so that, when fed forward through the trained network, it produces the "this is a cat" output. The optimization resembles backpropagation; however, instead of adjusting the network weights, the weights are held fixed and the input is adjusted.
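As a concrete illustration, the following is a minimal sketch of this procedure in PyTorch, assuming a pretrained torchvision classifier; the model choice, the ImageNet class index 281 ("tabby cat"), and the step size and count are illustrative assumptions, not details fixed by the sources cited below.

    import torch
    import torchvision.models as models

    # Load a pretrained classifier and freeze its weights; only the input
    # image will be optimized (torchvision >= 0.13 weights API assumed).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)

    x = torch.rand(1, 3, 224, 224, requires_grad=True)  # random starting image
    target_class = 281                                   # "this is a cat" output

    for step in range(200):
        score = model(x)[0, target_class]  # activation to be increased
        score.backward()                   # gradient w.r.t. the input, not weights
        with torch.no_grad():
            x += 0.05 * x.grad / (x.grad.norm() + 1e-8)  # normalized ascent step
            x.clamp_(0, 1)                 # keep pixel values in a valid range
            x.grad.zero_()

Note that the update ascends the class score (gradient ascent on the input), whereas training would descend a loss with respect to the weights.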

Alternatively, an existing image can be altered so that it is more cat-like, and the resulting enhanced image can again be input to the procedure.[1] This usage resembles the childhood activity of looking for animals or other patterns in clouds.
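A sketch of this enhancement loop, reusing the same fixed network as above: each round applies a few ascent steps to the current image and feeds the result back in as the next starting point. The file name "photo.jpg", the class index, and all loop counts are placeholders.

    import torch
    import torchvision.models as models
    import torchvision.transforms.functional as TF
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    # Start from an existing photograph instead of random noise.
    img = Image.open("photo.jpg").convert("RGB").resize((224, 224))
    x = TF.to_tensor(img).unsqueeze(0)

    for _ in range(5):                      # re-input the enhanced image each round
        x = x.clone().requires_grad_(True)
        for _ in range(50):                 # a few ascent steps per round
            score = model(x)[0, 281]        # make the photo more "cat-like"
            score.backward()
            with torch.no_grad():
                x += 0.02 * x.grad / (x.grad.norm() + 1e-8)
                x.clamp_(0, 1)
                x.grad.zero_()
        x = x.detach()                      # result becomes the next round's input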

The dreaming idea and name became popular on the internet in 2015 thanks to Google's DeepDream program. The idea dates from early in the history of neural networks,[2] and was explored more recently (but prior to Google's work) by several research groups.[3][4]

Applying gradient descent independently to each pixel of the input produces images in which adjacent pixels have little relation, so the result contains too much high-frequency information. The generated images can be greatly improved by including a prior or regularizer that prefers inputs with natural image statistics (without a preference for any particular image), or that are simply smooth.[4][5][6] For example, Mahendran and Vedaldi[5] used a total variation regularizer, which prefers images that are piecewise constant. Various regularizers are discussed further by Yosinski et al.[6]
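For instance, a total variation penalty can be sketched as below and subtracted from the dreamed activation, so that the optimization trades activation strength against smoothness; the weight tv_lambda is an assumed hyperparameter, not a value given by the sources.

    import torch

    def total_variation(x: torch.Tensor) -> torch.Tensor:
        # Sum of absolute differences between vertically and horizontally
        # adjacent pixels; small for piecewise-constant images.
        dh = (x[..., 1:, :] - x[..., :-1, :]).abs().sum()
        dw = (x[..., :, 1:] - x[..., :, :-1]).abs().sum()
        return dh + dw

    # Inside the ascent loop of the earlier sketch, the objective becomes:
    #   objective = model(x)[0, target_class] - tv_lambda * total_variation(x)
    # so pixels are discouraged from varying independently of their neighbors.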

The dreaming idea can be applied to hidden (internal) neurons rather than only those in the output layer, which allows exploration of the roles and representations of various parts of the network.[6] It is also possible to optimize the input to excite either a single neuron (a usage sometimes called activity maximization)[7] or an entire layer of neurons.
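The following sketch dreams on a hidden layer by using a forward hook to capture an internal activation; the choice of layer ("layer3" of a ResNet-18) and of objective (the mean activation of the whole layer) are illustrative assumptions.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    captured = {}
    def hook(module, inputs, output):
        captured["act"] = output            # store the hidden layer's activation

    model.layer3.register_forward_hook(hook)

    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    for step in range(100):
        model(x)                            # forward pass fills captured["act"]
        # Maximize the whole layer; indexing a single channel instead, e.g.
        # captured["act"][0, 7].mean(), gives single-neuron activity maximization.
        objective = captured["act"].mean()
        objective.backward()
        with torch.no_grad():
            x += 0.05 * x.grad / (x.grad.norm() + 1e-8)
            x.clamp_(0, 1)
            x.grad.zero_()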

While dreaming is most often used for visualizing networks or producing computer art, it has recently been proposed that adding "dreamed" inputs to the training set can improve training times.[8]

References

  1. Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015). "Inceptionism: Going Deeper into Neural Networks". Google Research. Archived from the original on 2015-07-03.
  2. Lewis, J. P. (1988). "Creation by refinement: a creativity paradigm for gradient descent learning networks". IEEE International Conference on Neural Networks. doi:10.1109/ICNN.1988.23933.
  3. Erhan, Dumitru (2009). "Visualizing Higher-Layer Features of a Deep Network". International Conference on Machine Learning Workshop on Learning Feature Hierarchies.
  4. Simonyan, Karen; Vedaldi, Andrea; Zisserman, Andrew (2014). "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps". International Conference on Learning Representations Workshop.
  5. Mahendran, Aravindh; Vedaldi, Andrea (2015). "Understanding Deep Image Representations by Inverting Them". IEEE Conference on Computer Vision and Pattern Recognition. doi:10.1109/CVPR.2015.7299155.
  6. Yosinski, Jason; Clune, Jeff; Nguyen, Anh; Fuchs, Thomas (2015). "Understanding Neural Networks Through Deep Visualization". International Conference on Machine Learning (ICML) Deep Learning Workshop.
  7. Nguyen, Anh; Dosovitskiy, Alexey; Yosinski, Jason; Brox, Thomas (2016). "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks". arXiv:1605.09304.
  8. Arora, Sanjeev; Liang, Yingyu; Ma, Tengyu (2016). "Why are deep nets reversible: A simple theory, with implications for training". arXiv:1511.05653.