Neural Style Transfer (NST) is an algorithm that creates an image by combining the stylistic features of a piece of artwork with the content features of a photograph. The defining characteristic of NST, setting it apart from other image stylization techniques, is its use of deep neural networks trained for image recognition. First introduced in 2016 by Leon Gatys, Alexander Ecker, and Matthias Bethge, the algorithm uses a single neural network to extract the content of one image and recombine it with the artistic style of another. Now a field unto itself, NST has shown an uncanny ability to approximate human artistic style. However, the exact way in which style is represented within the model is highly unintuitive and still not well understood. Moreover, little justification is given for the choice of network architecture, size, or training environment. This leads to questions such as: can any network designed for image recognition perform NST? Does the network's depth or training set have an effect on NST? To explore these questions, this thesis presents several experiments on Neural Style Transfer using networks that have been trained on small image recognition tasks. In this simplified setting we assess various aspects of the NST algorithm with these primitive networks. The results of our experiments show that certain architectures do not have the capacity for NST, while other networks can produce NST-like results but suffer in visual quality.
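As a minimal sketch of how Gatys-style NST typically represents style: the style of an image is commonly captured by the Gram matrix of a layer's feature maps (channel-wise correlations), and a style loss compares these matrices between two images. The shapes, function names, and random features below are illustrative assumptions, not part of this thesis.

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) feature maps from one network layer.
    Returns the (channels, channels) matrix of channel correlations,
    normalized by the number of spatial positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (h * w)

def style_loss(feat_a, feat_b):
    """Mean squared difference between the two images' Gram matrices."""
    ga, gb = gram_matrix(feat_a), gram_matrix(feat_b)
    return float(np.mean((ga - gb) ** 2))

# Illustrative stand-in for real feature maps (8 channels, 16x16 spatial grid).
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))
print(style_loss(feats, feats))  # identical feature maps -> 0.0
```

In the full algorithm, this style loss (summed over several layers) is combined with a content loss on deeper feature maps, and the output image is optimized by gradient descent to minimize the weighted sum.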
Copyright is held by the author.
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Thesis advisor: Tupper, Paul