The project is explained in the following article.
It shows how to reuse the feature extractor of a model trained for object detection (or other tasks) in a new model designed for style transfer.
VGG-16, the feature extractor of the SSD300 model (from a previous repository), is used to achieve style transfer through a combination of style and content losses:
The notebook style_transfer_example.ipynb can be used to run the model, together with a style image, on image or video content.
The script under utils/ allows creating concatenations of multiple inference results (image or video):